proj-oot-ootNotes28

very interesting slides on in-memory computation:

http://web.stanford.edu/class/ee380/Abstracts/171108-slides.pdf

looks like they are doing the O(ln(n)) in-parallel-similarity-query thing i thought of, as well as some other stuff

---

The stack cookie (also known as "canary") does not prevent the return address from being overwritten, but it increases the chances that the code notices the overwrite before fatefully following the overwritten return address.

This is heuristic: the idea is that most buffer overflows which end up with overwriting the return address proceed sequentially from a stack buffer, on the "other side" of the cookie slot, and thus will also overwrite the cookie. "Keeping the cookie intact" requires that the attacker can somehow make a "jumping overflow" (it happens, but rarely) or can obtain the cookie value so that he can overwrite the cookie value with itself. Obtaining the cookie value is hard since it is normally chosen randomly at execution time (details vary depending on the OS and OS version), and not advertised; in some rare situations, the attacker can obtain the cookie value indirectly from the consequences of another exploitable vulnerability.

In practice the stack cookie makes the attacker's life a bit harder, but for a quite big bit. It is not 100% effective, as a defence mechanism, but it is not trivially worked around either.

answered Dec 19 '13 at 18:04 by Tom Leek


A stack canary protects the return address on the stack by first checking the canary value before moving the return address from the stack to the EIP. In order to overwrite the return address on the stack with a stack-based buffer overflow, the canary must be overwritten. If it is known that the return address on the stack was overwritten, then the program can exit safely without passing the flow of execution to the attacker.
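A rough C sketch of what the compiler-inserted canary check amounts to (hedged: real compilers emit the guard load and compare in the function prologue and epilogue, and place the canary just below the saved return address; the names and layout here are only illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* stand-in for the per-process random guard value chosen at startup */
    static uintptr_t demo_stack_guard = 0xcafe1234u;

    static void copy_name(const char *untrusted, size_t len) {
        uintptr_t canary = demo_stack_guard; /* sits between the locals and the return address */
        char buf[16];

        memcpy(buf, untrusted, len);         /* a linear overflow past buf... */

        if (canary != demo_stack_guard) {    /* ...clobbers the canary before the return address */
            fprintf(stderr, "*** stack smashing detected ***\n");
            abort();                         /* die before the corrupted return address is used */
        }
    }

    int main(void) {
        char attack[64];
        memset(attack, 'A', sizeof attack);
        copy_name(attack, 40);               /* deliberately overruns buf; expect an abort */
        return 0;
    }

The point of the layout is exactly the heuristic described above: a sequential overflow has to walk over the canary before it can reach the return address, so checking the canary before returning catches the common case.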

---

" 12.7. Access permissions

Access permissions are controlled through translation table entries. Access permissions control whether a region is readable or writeable, or both, and can be set separately to EL0 for unprivileged and to EL1, EL2, and EL3 for privileged accesses, as shown in Table 12.4.

Table 12.4. Access permissions

  AP | Unprivileged (EL0) | Privileged (EL1/2/3)
  00 | No access          | Read and write
  01 | Read and write     | Read and write
  10 | No access          | Read-only
  11 | Read-only          | Read-only

The operating system kernel runs in execution level EL1. It defines the translation table mappings, which are used by the kernel itself and by the applications that run at EL0. Distinction between unprivileged and privileged access permissions is required as the kernel specifies different permissions for its own code and for applications. The hypervisor, which runs at execution level EL2, and Secure monitor EL3 only have translation schemes for their own use and therefore there is no need for a privileged and unprivileged split in permissions.

Another kind of access permission is the executable attribute. Blocks can be marked as executable or non-executable (Execute Never (XN)). You can set the attributes Unprivileged Execute Never (UXN) and Privileged Execute Never (PXN) separately and use this to prevent, for example, application code running with kernel privilege, or attempts to execute kernel code while in an unprivileged state. Setting these attributes prevents the processor from performing speculative instruction fetches to the memory location and ensures that speculative instruction fetches do not accidentally access locations that might be perturbed by such an access, for example, a First in, First out (FIFO) page replacement queue. Therefore, device regions must always be marked as Execute Never. "
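As an illustration of how those attributes are encoded, here is a hedged C sketch of a stage-1 block/page descriptor for AArch64; the bit positions (AP[2:1] at bits 7:6, PXN at bit 53, UXN at bit 54) are quoted from memory of the ARMv8-A VMSA, so treat them as an assumption and check the Architecture Reference Manual before relying on them:

    #include <stdint.h>
    #include <stdio.h>

    #define DESC_VALID      (1ULL << 0)
    #define DESC_AP_RW_EL1  (0ULL << 6)   /* AP = 00: EL1 read/write, EL0 no access */
    #define DESC_AP_RW_ALL  (1ULL << 6)   /* AP = 01: read/write at EL1 and EL0 */
    #define DESC_AP_RO_EL1  (2ULL << 6)   /* AP = 10: EL1 read-only, EL0 no access */
    #define DESC_AP_RO_ALL  (3ULL << 6)   /* AP = 11: read-only at EL1 and EL0 */
    #define DESC_PXN        (1ULL << 53)  /* Privileged Execute Never */
    #define DESC_UXN        (1ULL << 54)  /* Unprivileged Execute Never */

    /* e.g. a device-memory mapping: privileged read/write, never executable */
    static uint64_t device_descriptor(uint64_t output_addr) {
        return output_addr | DESC_VALID | DESC_AP_RW_EL1 | DESC_PXN | DESC_UXN;
    }

    int main(void) {
        printf("descriptor = 0x%016llx\n",
               (unsigned long long)device_descriptor(0x40000000ULL));
        return 0;
    }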

---

some types of exploit mitigations:

" A virtual-memory region that automatically grows as a result of page faults brings some inherent risks; in particular, it must be prevented from growing into another memory region placed below it. ... In a single-threaded process, the address space reserved for the stack can be large and difficult to overflow. Multi-threaded processes contain multiple stacks, though; those stacks are smaller and are likely to be placed between other virtual-memory areas of interest ... The kernel has long placed a guard page — a page that is inaccessible to the owning process — below each stack area. (Actually, it hasn't been all that long; the guard page was added in 2010). A process that wanders off the bottom of a stack into the guard page will be rewarded with a segmentation-fault signal, which is likely to bring about the process's untimely end."

" As you probably already know... memory addresses from a user-mode applications perspective are virtual. So this means that a NULL pointer can actually be a valid virtual memory address. There have been many exploits taking advantage of NULL pointer dereferences to execute code.

I believe what EMET is doing is pre-allocating memory at the virtual address 0x00000000 and probably setting the protection to PAGE_NOACCESS or PAGE_GUARD. You could then use an exception handler to detect access to this address. "
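The quote is speculating about EMET on Windows; as a hedged POSIX-flavoured analog, a fault handler can at least recognise accesses near address zero as NULL-pointer dereferences rather than ordinary crashes (4096 is an assumed page size, and printing from a signal handler is not strictly async-signal-safe, which is fine for a demo but not for production):

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig, siginfo_t *info, void *ctx) {
        (void)sig; (void)ctx;
        if ((uintptr_t)info->si_addr < 4096)
            fprintf(stderr, "NULL-pointer dereference near %p\n", info->si_addr);
        else
            fprintf(stderr, "segfault at %p\n", info->si_addr);
        _exit(1);
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *p = NULL;
        return *p;   /* triggers the handler */
    }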

---

" Yes, Linux supports no less then 4 different scheduling methods for tasks: SCHED_BATCH, SCHED_FAIR, SCHED_FIFO and SCHED_RR.

Regardless of scheduling method, all tasks also have a fixed hard priority (which is 0 for batch and fair and from 1-99 for the RT scheduling methods of FIFO and RR). Tasks are first and foremost picked by priority - the highest priority wins.

However, with several tasks available for running with the same priority, that is where the scheduling method kicks in: A fair task will only run for its allotted weighted (with the weight coming from a soft priority called the task nice level) share of the CPU time with regard to other fair tasks, an RR task will run for a fixed time slice before yielding to another task (of the same priority - higher priority tasks always win) and a FIFO task will run till it blocks, disregarding other tasks with the same priority.

Please note what I wrote above is accurate but not complete, because it does not take into account advanced CPU reservation features, but it gives the details about how the different scheduling methods interact with each other. "
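A small sketch of selecting one of these policies at run time, using the standard Linux/POSIX call; SCHED_FIFO and SCHED_RR take a static priority of 1-99 and normally require root or CAP_SYS_NICE, while the default fair policy is selected as SCHED_OTHER with priority 0:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 50 };

        /* pid 0 means the calling process */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running as SCHED_FIFO, priority %d\n", sp.sched_priority);

        /* switch back to the normal (fair) policy */
        sp.sched_priority = 0;
        sched_setscheduler(0, SCHED_OTHER, &sp);
        return 0;
    }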

---

" ASN.1 is a very tricky thing to pull off well ▫ Multiple vulns in OpenSSL?, NSS, ASN1C, etc • LibDER? itself actually rather solid "

---

someday need to write a simplified/subsetted YAML. There are a few of these already, must check my notes for them

also a lightweight markup language

---

"The RISC-V Foundation has launched the J extension working group today, to better support managed-language support to RISC-V."

---

ausjke 1274 days ago [-]

This is well known in the IoT field I assume, I used it a few years back, after comparing with tinyos and such. Basically you have Linux, then FreeRTOS, then Contiki, from large system to the tiny devices.

kqr2 1274 days ago [-]

Based on this embedded systems survey [1], FreeRTOS is actually quite popular.

http://www.freertos.org/

[1] http://www.eetimes.com/document.asp?doc_id

---

seL4 vs FreeRTOS vs Contiki:

" - FreeRTOS is a generic Real Time Operating System, while Contiki is not, it has no conventional threads nor preemption.

- FreeRTOS is a generic OS, it runs uIP only. While Contiki is a 802.15.4 OS, specifically tailored to bring 802.15.4 networks to life, FreeRTOS doesn't do that by default "

"This is well known in the IoT field I assume, I used it a few years back, after comparing with tinyos and such. Basically you have Linux, then FreeRTOS, then Contiki, from large system to the tiny devices."

others: TINY OS, RIOT

this comment says Contiki is leading tiny OS:

" It's interesting to see that Contiki is taking the lead in this space, since it was once going toe-to-toe with another open source OS for wireless embedded networked devices, TinyOS?. TinyOS? had a large following in the research community and I believe it was used in several commercial sensor network deployments by Dust Networks and Arch Rock and at least one other Korean startup company -- that i believe is still using it in their deployments. "

" Contiki and Tiny OS are not real time systems, but their presence on the market is domin ant (mainly in wireless sensor networks) and therefore they are used for reference. The same reason applies to Linux. "

" Programming models with Contiki and TinyOS? are defined by events in a way that all tasks are executed in the same context, although they offer a partial multithreading support. Contiki uses a programming language similar to C, but can’t use certain keywords. TinyOS? is written in the language called nesC, which is similar but not compatible with the C language. Linux, on the other hand, supports true multithreading, it is written in standard C, and it offers support for various programming languages

TinyOS? version 2.1 introduces TOSThreads, fully preemptable user-level applicati on threads library [4], in which programmer can use C and nesC APIs, but outside of that still has to use only nesC.

A TinyOS? has to be present in the form of source code or as a library during the compilation of user programs (static linking), providing a common binary program which is then programmed into device. This approach simplifies some things, such as better resource usage analysis, or more efficient optimization. On the other hand, changes in customer applications require a redistribution of the entire operating system. Unlike TinyOS?, Contiki has the ability to load individual applications or services during the execution of the operating system on the device, which resembles the mechanisms on general purpose computers. On top of Contiki basic event- driven kernel other execution models can be used. Instead of the simple event handlers processes can use Protothreads [5]. Protothreads are simple forms of normal threads in a multi-threaded environment. Protothreads are stackless so they save their state information in the private memory of the pr ocess. Like the event handler Protothreads can not be preempted and run until the it puts itself into a waiting state until it is scheduled again. Along with event-driven kern el, Contiki also contains a preemptive multithreading library. It is statically linked with application program only if the program explicitly calls some of its functions. However, each thread from this library must have its own stack, which is not the case in Protothreads [6 ].

"

min RAM and ROM according to Choosing the right RTOS for IoT platform (2015):

min RAM:

min ROM:

FreeRTOS?: "Typically a kernel binary image will be in the region of 4kB to 9kB [7]."

ChibiOS?/RT: "The kernel size with all the subsystems activated weighs around 5.5kB for STM32 Cortex-M3 microcontroller"

RT-Thread: "The kernel itself occupies as low as 3kB in ROM and 1kB in RAM"

Erika Enterprise: "minimal memory footprint of 2kB, up to more complete configurations"

" The greatest disadvantage of freeRTOS is that it doesn’t implement advanced mechanism for shared resources like priority ceilings to avoid priority inversion. "

so it sounds like the small ones mentioned here are FreeRTOS, RIOT, TinyOS, ChibiOS/RT, RT-Thread, Erika

https://hal.inria.fr/hal-01245551/file/IoT-OS-survey.pdf classifies small devices via RFC 7228 into class 0 (<<10k RAM, <<100k ROM), class 1 (~10k RAM, ~100k ROM), class 2. Table I mentions the following prominent class 0 and 1 FOSS systems (their classification is actually a little more nuanced than this):

(they also mention other 'not as prominent' systems: class 0: nanoRK, Nut/OS, class 1: ChibiOS/RT)

they also mention that the L4 microkernel family, including seL4, is too heavy for class 1; except for F9.

they also mention that uClinux is too heavy for class 1.

they say:

" Contiki [40], [41]: Contiki was originally developed as an OS for WSNs running on very memory-constrained 8-bit MCUs, but now also runs on 16-bit MCUs and modern IoT? devices based on the ARM 32-bit MCUs. It is based on an event-driven, cooperative scheduling approach, with support for lightweight pseudo-threading. While being written in the C programming language, some parts of the OS make use of macro-based abstractions (e.g., Protothreads [42]), and in effect require developers to consider certain restrictions as to what type of language features they can use. Contiki code is available under BSD license on GitHub? 3 and other platforms, while a large variety of forks are developed independently (including many closed source versions of the OS). Contiki features several network stacks, including the popular uIP stack, with support for IPv6, 6LoWPAN?, RPL, and CoAP?; and the Rime stack, which provides a set of distributed programming abstractions. Contiki is developed since 2002, and is so far one of the most used open source OSs for constrained nodes.

2) RIOT [43]–[45]: RIOT was developed with the particular requirements of IoT in mind and aims for a developer-friendly programming model and API, e.g. similar to what is experienced on Linux. RIOT is a microkernel-based RTOS with multi-threading support, using an architecture inherited from FireKernel [46]. While the OS is written in C (ANSI99), applications and libraries can also be implemented in C++. The source code is available on GitHub under LGPLv2.1. RIOT features several network stacks, including its own implementation of the full 6LoWPAN stack (the gnrc stack), a port of the 6TiSCH stack OpenWSN [47], and a port of the information centric networking stack CCN-lite [48]. RIOT is developed as such since 2012, by a growing, world-wide open source community.

3) FreeRTOS [49]: FreeRTOS is a popular RTOS which has been ported to many MCUs. Its preemptive microkernel has support for multi-threading. It is now developed by Real Time Engineers Ltd. and its code is available on the project page under a modified GPL that allows commercial usage with closed source applications (only the kernel has to remain open source). Although it does not provide its own network stack, third-party network stacks can be used for Internet connectivity. FreeRTOS is developed since 2002, and is so far one of the most used open source RTOSs for constrained nodes.

4) TinyOS [50]: Together with Contiki, TinyOS is the most prominent OS for WSN applications, targeting very constrained 8 bit and 16 bit platforms and is known for its sophisticated design. TinyOS and nesC evolved language primitives and programming abstractions to prevent as many bugs as possible through software structure and enhance memory efficiency by reducing the actual linked code to a minimum. However, the rather complex design in combination with a customized programming language makes it hard to learn, and it is thus lacking a bigger developer community [51]. It follows an event-driven approach, where several components or modules can be virtually wired, as described by configurations according to the requirements. It is written in a dialect of the C programming language, called nesC. Its source code is available online under the BSD license on GitHub. The included BLIP network stack implements the 6LoWPAN stack. TinyOS is developed since 2000, and is so far one of the most used open source OSs for constrained nodes, with Contiki.

5) OpenWSN [47]: OpenWSN comprises a 6TiSCH network stack, a basic scheduler, and a Board Support Package (BSP), i.e., a simple hardware abstraction, making it possible to run OpenWSN on a dozen IoT hardware platforms. As such, OpenWSN is more of a network stack than a full-fledged OS. OpenWSN code is available online under the BSD license on GitHub. The main focus of OpenWSN is the 6TiSCH network stack, including an implementation of the IEEE 802.15.4e MAC amendment [26]. OpenWSN is developed since 2010, by a growing, world-wide open source community.

6) NuttX [52]: The NuttX OS aims for full POSIX and ANSI compliance and supports MCUs ranging from 8 bit up to 32 bit architectures. NuttX can be built as a microkernel as well as a monolithic version. It is highly modular and features real-time capabilities as well as a tickless scheduler. The source code is available under BSD license on Sourceforge. The integrated network stack includes support for IPv4 and IPv6 with various upper layer protocols. NuttX is developed since 2007.

7) eCos [53]: The embedded configurable operating system (eCos) supports 16, 32, and 64 bit embedded hardware. eCos code is available under a custom license based on GPL with linking exception (acknowledged by FSF). While the open source version of eCos seems rather inactive, the commercial version (eCosPro by eCosCentric) is under active development. eCos does not provide an own network stack per se, but supports third-party network stacks (lwIP and the FreeBSD network stack). The source code is available in a Mercurial repository. eCos is developed since 2002, but parts of the code-base are older. "

they divide OSs into 3 categories:

Event-driven OSs:

This is the most common approach for OSs initially developed to target the domain of WSNs, such as Contiki or TinyOS for instance. The key idea of this model is that all processing on the system is triggered by an (external) event, typically signaled by an interrupt. As a consequence the kernel is roughly equivalent to an infinite loop handling all occurring events within the same context. Such an event handler typically runs to completion. While this approach is efficient in terms of memory consumption and low complexity, it imposes some substantial constraints to the programmer, e.g., not all programs are easily expressed as a finite state machine [40]. OSs that fall in this category include Contiki, TinyOS, and OpenWSN. Because of its wider deployment and use (to the best of our knowledge), Contiki is arguably a good representative of this category of OS.

Multi-Threading OSs:

Multi-threading is the traditional approach for most modern OSs (e.g. Linux), whereby each thread runs in its own context and manages its own stack. With this approach, some scheduling has to perform context switching between the threads. Each process is handled in its own thread and can, in general, be interrupted at any point. Stack memory can usually not be shared between threads. Hence, a multi-threading OS usually introduces some memory overhead due to stack over-provisioning and runtime overhead due to context switching (see the sketch after these three category descriptions). Operating systems that fall in this category include RIOT, NuttX, eCos, or ChibiOS. Because of its stronger focus on IoT requirements (to the best of our knowledge), RIOT is arguably a good representative of this category of OS.

Pure RTOSs:

An RTOS focuses primarily on the goal of fulfilling real-time guarantees, in an industrial/commercial context. In this context, formal verification, certification, and standardization are usually of crucial importance. To allow model checking and formal verification, the programming model used in such OSs typically imposes strict constraints for developers. These restrictions often make the OS rather inflexible and porting to other hardware platforms may become rather difficult. Operating systems for IoT devices that fall in this category include FreeRTOS, eCos, RTEMS, ThreadX, and a collection of other commercial products (generally closed source). FreeRTOS is to the best of our knowledge the most prominent open source RTOS for IoT devices, due to its wider use in various environments.
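The stack over-provisioning cost mentioned for the multi-threading category can be illustrated with plain POSIX threads, used here as a stand-in for the MCU-sized primitives of RIOT and similar systems: every thread gets its own fixed stack, which has to be sized for its worst case up front.

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        printf("worker %ld running on its own stack\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        /* reserve a fixed stack per thread; too small and deep calls overflow,
           too large and memory is wasted (the over-provisioning trade-off) */
        pthread_attr_setstacksize(&attr, 64 * 1024);

        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], &attr, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);

        pthread_attr_destroy(&attr);
        return 0;
    }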

they then present detailed notes on one representative from each of these: Contiki, RIOT, FreeRTOS.

again the link is: https://hal.inria.fr/hal-01245551/file/IoT-OS-survey.pdf

" Contiki (event-based):

At runtime, all processes share the same memory space and privileges with the core system. ... cooperative thread scheduling ... For memory allocation, Contiki is designed primarily for static allocation...third-party dynamic allocation modules for Contiki, which implement the standard C malloc API ... programming model is based on Protothreads, which is a sort of light-weight, cooperative threading concept similar to continuations [42]. The main programming language supported by Contiki is C, but there exist runtime environments that enable development in languages such as Java [92] and Python [93]. ... provides features such as a shell, a file system, a database management system, runtime dynamic linking, cryptography libraries, and a fine-grained power tracing tool. ... Multiple real-world deployments are based on Contiki, and it is widely used in commercial IoT products, as well as in academic research on WSN and other types of constrained wireless multi-hop networks. Along with TinyOS, Contiki has become one of the most well-known and widely used OSs for WSN

RIOT (multithreading):

RIOT is based on a microkernel architecture with full multi-threading. Since multi-threading typically introduces run-time as well as memory overhead, particular efforts were put into designing efficient context switching, IPC (blocking and non-blocking), and a small thread control block (TCB). As a result, context switching in RIOT is achieved in a small number of CPU cycles (e.g., less than 100 CPU cycles on an ARM platform when triggered from interrupt context) and the TCB is reduced to 46 bytes on 32 bit platforms, for instance. RIOT provides a tickless scheduler that works without any periodic events. Whenever there are no pending tasks, RIOT will switch to the idle thread, which can use the deepest possible sleep mode, depending on peripheral devices in use. Only interrupts (external or kernel-generated) wake up the system from idle state. RIOT supports both dynamic and static memory allocation. However, only static methods are used within the kernel, which enables RIOT to fulfill deterministic requirements, by enforcing constant periods for kernel tasks (e.g., scheduler run, inter-process communication, timer operations) ... programming model in RIOT follows a classical multi-threading concept with a memory-passing IPC between threads. Its kernel is written in C (with minor parts being implemented in assembler). However, both C and C++ are available as programming language for applications and application libraries. ... On the system side, RIOT focusses on implementing standard interfaces like POSIX.

... FreeRTOS (real-time):

In contrast to many other RTOSs, FreeRTOS is designed to be small, simple, portable, and easy to use. Therefore, it is supported by a large community ...

a fairly simple architecture, as it comprises only four C files and is more a threading library than a full-fledged operating system. The only provided functionalities are thread handling, mutexes, semaphores, and software timers. In the default configuration FreeRTOS uses a preemptive, priority-based round-robin scheduler, which is triggered by a periodic timer tick interrupt. Since version 7.3.0 (released October 31 2012) the scheduler further supports a tickless mode. In order to fulfill real-time guarantees, it is ensured that FreeRTOS uses only deterministic operations from inside a critical section or interrupt. In FreeRTOS, queues are used for IPC which support blocking and non-blocking insert (using deep copy), as well as remove functions ...

FreeRTOS defines five different memory allocation schemes: (i) allocate only, (ii) allocate and free with a simplistic, fast algorithm, (iii) wrapping C library malloc() and free() for thread safety, (iv) a more complex but fast allocate and free algorithm with memory coalescence, and (v) a more advanced version of (iv) that allows spanning the heap over several memory sections. ...

multi-threading programming model with statically instantiated tasks. The programming language used for the OS itself is C, which enables users to integrate it seamlessly also in any C++ application. As stated above, the feature set of the basic system is limited to scheduling, threading and SW timers.

...

Multi-threaded OSs are technically closest to Linux, and within this category, RIOT is currently the most prominent open source OS. Event-driven OSs use a different programming paradigm to fit on devices with even less resources, and within this category, Contiki is currently the most prominent open source OS. RTOSs focus on guarantees for worst-case execution times and worst-case interrupt latency. In this category, FreeRTOS is currently the most prominent open source OS "
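A sketch of the queue-based IPC and task model described above for FreeRTOS, written against its public API (FreeRTOS.h, task.h, queue.h); stack depths and priorities are placeholder values, and the snippet only builds inside a FreeRTOS project:

    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    static QueueHandle_t q;

    static void producer(void *arg) {
        (void)arg;
        int n = 0;
        for (;;) {
            xQueueSend(q, &n, portMAX_DELAY);   /* the item is deep-copied into the queue */
            n++;
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    static void consumer(void *arg) {
        (void)arg;
        int n;
        for (;;) {
            if (xQueueReceive(q, &n, portMAX_DELAY) == pdTRUE) {
                /* handle n */
            }
        }
    }

    int main(void) {
        q = xQueueCreate(8, sizeof(int));
        xTaskCreate(producer, "prod", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(consumer, "cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();                  /* does not return on success */
        return 0;
    }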

all three of RIOT, Contiki, FreeRTOS can target an MCU w/o MMU with <32 kB RAM

regarding TinyOS vs Contiki:

" Together with Contiki, TinyOS is the most prominent OS for WSN applications, targeting very constrained 8 bit and 16 bit platforms and is known for its sophisticated design. TinyOS and nesC evolved language primitives and programming abstractions to prevent as many bugs as possible through software structure and enhance memory efficiency by reducing the actual linked code to a minimum. However, the rather complex design in combination with a customized programming language makes it hard to learn, and it is thus lacking a bigger developer community [

"OpenWSN is more of a network stack than a full-fledged OS"

https://devopedia.org/iot-operating-systems

:

" As a thumbrule, a system that consumes less than 16KB of RAM and Flash/ROM does not require an OS. Such systems most often run on 8-bit or 16-bit MCUs. With such systems, we can get away with a single event loop that polls and processes events as they occur. But if more complexity is added to the system, the response times will be limited by the worst-case processing time of the entire loop " (cites Lamie, William. 2015. "The Benefits of RTOSes in the Embedded IoT?." EE Times. September 14. Accessed 2017-03-13.)

" Sensor nodes will have less than 50KB of RAM and less than 250KB of ROM. Contiki requires only 2KB of RAM and 40KB of ROM. Similar numbers are quoted for Mantis and Nano RK. Zephyr requires only 8KB of memory. Apache Mynewt requires 8KB of RAM and 64 KB of ROM. It's kernel takes up only 6KB. Communication protocols typically take up 50-100KB of ROM. "

"

What design techniques are used by IoT OS?

TinyOS and ARM mbed use a monolithic architecture while RIOT and FreeRTOS use a microkernel architecture. ARM mbed uses a single thread and adopts an event-driven programming model. Contiki uses protothreads. RIOT, FreeRTOS and µC/OS use a multithreading model. In static systems (TinyOS, Nano RK), all resources are allocated at compile time. Dynamic systems are more flexible but also more complex. File systems may not be required for the simplest sensor nodes but some OS support single level file systems. With respect to power optimization, this is done for the processor as well as its peripherals. "

---

on FreeRTOS vs seL4:

" FreeRTOS does not use protected/kernel mode, nor does it switch address spaces, rather everything (user processes and system services) run in a single address space. As mentioned earlier this will always give the best performance, but you sacrifice any kind of isolation or security. "

[1]



" RISC-V ISA defines two major interrupt types: global and local. Basically, global interrupts are designed for multicore environments, while local interrupts are always associated with one specific core. Local interrupts suffer less overhead as there is no need for arbitration (which is the case of global interrupts on multicore systems). "

---

" (From the Erlang website (http://erlang.org/faq/implementations.html): "Getting Erlang to run on, say, an 8 bit CPU with 32kByte of RAM is not feasible. People successfully run the Ericsson implementation of Erlang on systems with as little as 16MByte of RAM. It is reasonably straightforward to fit Erlang itself into 2MByte of persistant storage (e.g. a flash disk).") "

---

" But even with PyPy?'s magical performance, we still have the GIL. Python doesn't allow you to execute CPU-bound Python code on multiple threads. If you are CPU bound, you need to offload that work to an extension (which releases the GIL when it executes hot code) or you spawn multiple processes. Since Mercurial needs to run on Windows (where new process overhead is ~10x worse than POSIX and is a platform optimized for spawning threads - not processes), many of the potential speedups we can realize via concurrency are offset on Windows by new process overhead and Python startup overhead. We need thread-level concurrency on Windows to help with shorter-lived CPU-bound workloads. This includes things like revlog reading (which happens on nearly every Mercurial operation). "

---

bostik 4 days ago [-]

As much as a I like Go in general, there are two pythonic things that, IMO, really should go into the next major revision.

1: sets; in this day and age, not having sets as native, stdlib-provided data types is simply unacceptable.[ß]

2: The ease of python's "in" keyword; being able to test for a key in a map (or set!) without any indirection is crucial

ß: granted, IIRC python moved sets into native stdlib types only somewhere in 2.x but better late than never

reply

c-cube 4 days ago [-]

Sets don't need to be native in languages that are both fast and have a proper type system. For example, rust and OCaml have a standard set in their stdlib, but it's not builtin. Of course, go lacking generics makes it impossible to implement a decent set type in it.

reply

---

" Uglier than a Windows backslash, odder than ===, more common than PHP, more unfortunate than CORS, more disappointing than Java generics, more inconsistent than XMLHttpRequest?, more confusing than a C preprocessor, flakier than MongoDB?, and more regrettable than UTF-16, the worst mistake in computer science was introduced in 1965. "

---

jorblumesea 2 days ago [-]

Personally found Webpack to be really heavy handed for what most developers need it for. Gulp on the other hand seemed easier to just build exactly what you need. Webpack felt like an overly complicated poorly documented black box that seemed to work by sifting through github issues and comments. You spend 2 days getting it into some semblance of a working state, and never touch it for fear of introducing a new issue. Gulp just...worked, and with very little fuss. Sadly React seemed tightly tied to webpack so if you want React you're tied to that toolchain.

In general I think tooling needs to improve, and neither Webpack nor Gulp are painless. Neither is maven, I suppose...

reply

codefined 2 days ago [-]

I'm not sure, my opinion of Webpack is completely different.

We used a ~20 line configuration file found on Gist to replace our ~400 line gulp build. It resulted in better optimisation, less hassle and "Just Worked™".

We wanted to add a few more things to our configuration, literally just took running `npm install some_package`, followed by requiring it in the webpack configuration.

reply

---

ggregoire 2 days ago [-]

How is React a pain to learn? It's the most simple framework I've ever learned. I can count all the methods/concepts on my fingers: components, state, props, setState, render and 5 or 6 lifecycle methods you occasionally use (componentDidMount and so on). The rest is vanilla JS.

I coded with Angular 1 for 2 years and had to open the doc each time I wanted to use ngRepeat…

reply

---

txmjs 1 day ago [-]

I've been using Vue.js on large scale projects for a while, but for a greenfield project I'm choosing React instead. React has better support for SSR which is still really important for public facing websites, JSX feels much more natural to me than the v-* directives, not to mention React's incredibly large ecosystem. That being said, I really like the way Vue handles CSS in single file components, and the simplicity of Vuex compared to Redux. I'm still struggling to find a styling solution for React that I'm completely happy with. As with everything, it's about choosing the right tool for the job.

reply

CharlesW 1 day ago [-]

> React has better support for SSR…

Mind elaborating on this? I'm about to start a project that will use SSR, and I'm really interested where Vue (which would otherwise be my preference) might limit me.

https://ssr.vuejs.org/en/

reply

Can_Not 1 day ago [-]

https://nuxtjs.org/ is a front end framework with first class support for SSR for VueJS that is about to release their 1.0. SSR compatibility was one of the major pillars of VueJS's 2.0 release.

reply

---

 mtpn 2 days ago [-]

I’ve heard this so many times, and don’t get me wrong I really like Vue, but I’m so surprised people find it easy to learn compared to the others. It wasn’t an uphill battle but I found getting started with vue to be about the same, if not little more overhead, than react, ember, or angular 1, all of which I’ve played with a little.

reply

---

megaman22 2 days ago [-]

The whole front-end build process ecosystem seems hopelessly complicated and frustrating to me, but I'm a hoary, old, nearly thirty, backend developer. I've yet to have anyone satisfactorily explain why grunt or gulp or webpack is better than makefiles and shell scripts, other than "But it's JavaScript!" Every month or so, dependencies break, and everything changes; it seems like a lot of work for very little result.

reply

batmansmk 2 days ago [-]

I hear you. Let me share my findings with you, knowing that, with only 10 years of experience, I found great comfort in this stack. It is not perfect, but it is indeed better than the previous one for me.

Portable - run on mac, windows, linux, others.

Embedded - ship with the project.

JS friendly / Integrated - for instance you can read and write from JSON/JS/APIs easily. Tons of plugins adapted to web and mobile related concerns. Try to minify CSS with make?

Adapted to modern hardware constraints - for instance, sharing global deps is not the default best practice anymore since HD space is not at premium anymore. Multicore/multithread is the default option, not behind a flag. I/O is the modern constraint, not RAM.

Limited knowledge to get started - imagine it is the first app you are making in JS, you just learned the diff between the browser and the server ;). Gulp takes you from where you are, make requires knowledge of Linux, bash, env variables, probably apt-get/yum, make itself. A place you have never been before.

Modern doc - it's my opinion, but man is an awful doc interface. No quickstart, it has a TOC, not a menu and no search bar sig.

Innovative 'cause no legacy - we wouldn't have investigated hot reloading, smart watch mode, "better" pkg mgmt.

Easier to debug.

No context switch ...

reply

always_good 2 days ago [-]

Well, portability, for one.

Also, Webpack is a sequence of one-liner incantations that do things that would be annoying to do by hand, like starting a websocket server to talk to your application to hot-swap the code.

The declarative config is an improvement over Gulp and Makefiles for that reason. Same reason I prefer pom.xml over build.gradle.

reply

---

zoom6628 7 hours ago [-]

Much as I would love the power of Python in Excel, it is important that whatever is done is consistent across the Office experience. Some of us are old enough to remember the multiple versions of VB-whatever across Excel, Word, and Access, and that in itself was a blow to productivity.

Yes they should choose Python, and in the process decide if it will be Python with a .Net library (standard and core as separate libs please!) or IronPython. This in itself is an important first choice.

Then it has to be done in a mechanism that enables the exact same libs and user written python code to work in the same way across all the Office products.

Other languages have been suggested, all good choices with their own merits. Lua is a good choice for compactness and speed. C# is the lingua franca of commercial developers so would suit ISVs, but I think it is too heavy for end user scripting. Nim or even FreePascal maybe? Lastly, what about just using VB.net consistently? VB is a great language for newbs and casual/adhoc programmers .... but it gets ignored because of problems with consistency of implementation by MS.

Last point I would like to make is IMHO the choice needs to be based on:
- ability to transpile to javascript so that Excel365 can be scripted from webapps
- install-free deployment; should be built into Excel in a way it can be used without any user install for dev or runtime
- standard vanilla language, not a variant

Disclaimer: A huge fan and long time user of python here. I also spent largest part of working life in ISV world rather than end user land.

reply

sametmax 2 hours ago [-]

The reason Python is a popular choice is because it's a very polyvalent language (not specialized for any task but quite good at a lot of them) BUT the data analysis toolkit in the Python ecosystem rocks (numpy, pandas, etc). Lua doesn't have a good data analysis story. C# is not made for scripting. VBA is not great outside of Office.

Now given that cPython 3.6 has f-string, and that you probably will use string formatting a lot if you script a MS product, I would implement this version specifically. It's the most recent stable version of the standard implementation anyway.

Actually no, I would not implement anything. I would embed the cPython 3.6 interpreter and stdlib, and just provide a binding to it. This limits the problem of bad / inconsistent implementations that crippled MS products in the past.

Then I would say "we guaranty to provide a 3.6 compatible cPython + stdlib for the next x years" so that people can be confident to write Python stuff. Otherwise, a bad implementation would be worst than no implementation at all.

reply

---

ptx 55 minutes ago [-]

> VB is a great language for newbs and casual/adhoc programmers

Is it, though? For example, VB puts four different ways of passing arguments right in your face: values by reference, values by value, object references by reference and object references by value. Python only has the last one and doesn't make you think about it.

And without "Option Explicit" VB does something very unhelpful (also seen in PHP): misspelled variables are created on read and silently converted to the required type, giving you nonsense results. (With "Option Explicit" enabled it gets verbose and tedious instead.)

reply

---

noonespecial 6 hours ago [-]

The best technologies can be understood and used in a simple case by a beginner but still "unfurl" to handle the general case.

The worst force you to embrace the entire complexity before you can even hello world.

Progress is being made.

reply

---

https://www.databasesandlife.com/the-cycle-of-programming-languages/

" The following cycle never ceases to amaze me:

    1. People learning programming find "real" languages such as C++ or Java filled with too many "complex" constructs.
    2. They find or invent languages such as Javascript or PHP or BASIC and think they can get the job done without "unnecessary complexity".
    3. As these programmers develop, they write increasingly complex programs, and find that constructs such as classes, inheritance, exceptions, generics/templates, errors upon encountering undefined variables, and static typing help them debug their code and write better code quicker.
    4. They then add these features to their programming languages and everyone rejoices, believing they've done something new and great.
    5. Other programmers – just starting out – find the current set of languages to be too complex as they contain features they don't understand they need, such as classes, inheritance, exceptions, etc.: go to step 1.

"

---

phkahler 21 hours ago [-]

One problem with standards organizations is that they have a bunch of people whose job is to make new versions of the standard. Look at the evolution of OpenGL, the web, C++, Unicode (love hotel? Poop emoji?) and so on. Even ASCII evolved to 8-bit and then got ANSI escape codes, but it's stable now. C is fairly stable now but that's because all the new stuff is in C++ (much like HTML replaced ASCII).

reply

Tobba_ 5 hours ago [-]

What's wrong with the evolution of OpenGL? Everything from OpenGL 3 and on has simply been focused on fixing the API without completely breaking old code, or exposing new GPU features. All of that was necessary.

reply

---

another post about ORMs and the object-relational impedence mismatch:

[2]

---

[–]murbardfounder 14 points 13 days ago

I think it misses the point. Michelson is a VM, but a typed, functional VM. If you want named functions, polymorphism, etc you can use a high level language and compile down to Michelson, e.g. Liquidity or Juvix.


---

djsumdog 10 hours ago [-]

So the web browser is pretty much a mini operating system.

...

ehnto 9 hours ago [-]

I have been thinking about why we have ended up here, and why not just native apps.

The obvious answer is that it makes applications portable, which is great. The other key component I think is delivery. You don't ever install anything, it just exists when you ask for it.

...

modeless 8 hours ago [-]

The answer is that browsers provide something that users desperately need and no operating system has ever provided: a sandbox strong enough to run completely untrusted code. The success of browsers is an indictment of the entire field of operating systems research. They have either failed to recognize the need or simply failed to deliver that kind of security.

For one example take WebGL. For decades OpenGL had been a native API prioritizing performance first and security last. It wasn't until browsers decided to expose OpenGL that the necessary work was done to make a graphics API safe for untrusted code. And who did that work? The browsers had to do most of it themselves.

reply

---

"...compile clang to wasm and have it generate code within the browser..."

Currently missing:

    Global constructors and destructors
    Standard library globals (e.g. cin, cout)
    RTTI, exception handling

---


jhallenworld 22 hours ago [-]

"Meltdown" is an Intel bug.

"Spectre" is very bad news and affects all modern CPUs. Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh. Otherwise assume that the entire process with jit generated code is open to reading by that code.

Any system which keeps data from multiple customers (or whatever) in the same process is going to be highly vulnerable.
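For concreteness, the kind of barrier a JIT would have to emit after a bounds check looks roughly like this (the x86 instruction usually recommended against Spectre variant 1 is lfence; mfence, mentioned above, also serializes but is heavier):

    #include <emmintrin.h>   /* _mm_lfence */
    #include <stddef.h>

    int load_checked(const int *array, size_t len, size_t untrusted_idx) {
        if (untrusted_idx < len) {
            _mm_lfence();               /* stop speculation from running past the check */
            return array[untrusted_idx];
        }
        return -1;
    }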

reply

kibwen 21 hours ago [-]

> Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh.

Here's the synchronized announcement from Chrome/Chromium: https://sites.google.com/a/chromium.org/dev/Home/chromium-se...

"Chrome's JavaScript? engine, V8, will include mitigations starting with Chrome 64, which will be released on or around January 23rd 2018. Future Chrome releases will include additional mitigations and hardening measures which will further reduce the impact of this class of attack. The mitigations may incur a performance penalty."

Chrome 64 will be hitting stable this month, which means that it ought to be possible to benchmark the performance penalty via testing in Chrome beta. Anybody tried yet?

reply

dannyw 21 hours ago [-]

The mitigations are to disable SharedArrayBuffer and severely round performance.now(). Not good that there aren’t other less intrusive ways to mitigate.

reply

---