proj-oot-ootNotes22

this guy has one of the only pages that lists both state-threads and libmill, two goroutine-like C libraries that were each mentioned in HN comments, but on different stories and by different people:

https://github.com/htoooth/the-guide-of-high-performance-geo-computing/blob/master/chapter/3.md#7-%E5%B9%B6%E8%A1%8C%E7%B1%BB%E5%BA%93

here's the most interesting stuff out of what else this guy mentions (i also copied some libraries he mentioned into my list):

(translation) "Attention: Cobra is very interesting language design testing and contract design into the language of design, it is worth learning."

https://github.com/htoooth/the-guide-of-high-performance-geo-computing/blob/master/image/programlanguage.jpg (visio of that jpg: https://github.com/htoooth/the-guide-of-high-performance-geo-computing/blob/master/image/programlanguage.vsdx )

translation: vertical axis: application level; horizontal axis: audience level. languages mentioned in it:

    high application level, low audience level: angelscript, chaiscript, daolang, idris, squirrel, aauto, elm, io, lua, newlisp
    low application level, low audience level: haxe, rust, d, scheme, go, nimrod, haskell, julia
    medium application level, high audience level: ruby (opal, crystal), python (rapydscript, hylang), js (coffeescript, livescript, typescript, clojurescript)
    low application level, high audience level: erlang (elixir), java (scala, clojure, kotlin, groovy, ceylon)
    highest audience level, lowest application level: C/C++

(not sure what he is getting at here with 'audience level')

reference design language

parallel class library

    Ltsmin. has something about an MPI event mechanism; can learn from it.
    Casablanca. has a thread library called PPLX, cross-platform; can refer to its API design.
    Ray. parallel computing based on the MPI library.
    Theron. design of a parallel implementation.
    mpi-rpc
    zht-mpi
    SObjectizer. a framework for agent-oriented programming in C++.
    CAF. an implementation of the actor design.
    TBB. Intel Threading Building Blocks.
    Libprocess. actor-style inter-process library.
    Channel. Name Space Based C++ Framework For Asynchronous, Distributed Message Passing and Event Dispatching.
    C++CSP2. Easy Concurrency for C++.
    Concurrencykit. high-concurrency library.
    libtask
    State-threads. thread library; there are some documents.
    Chan. channel library in pure C.
    Libcoro. a concurrency library.
    Libev.
    mapreduce c++
    Skynet. blog on industrial-grade use of multiprocess, multithreading, and embedded scripts; content of the blog is here.
    Fibjs. has more than one class you can learn from.
    meguro [Javascript]. A Javascript Map/Reduce framework.
    Disco [python]. a lightweight, open-source framework for distributed computing based on the MapReduce paradigm.
    mrjob. Run MapReduce jobs on Hadoop or Amazon Web Services.
    jug. A Task-Based Parallelization Framework.
    GraphLab
    MR4C. MapReduce for C, with the aim of optimizing for geospatial data and computer vision code libraries.
    GeoKettle. a powerful, metadata-driven Spatial ETL tool dedicated to the integration of different spatial data sources for building and updating geospatial data warehouses.
    HPX. A general purpose C++ runtime system for parallel and distributed applications of any scale.
    dask. provides multi-core execution on larger-than-memory datasets using blocked algorithms and task scheduling.
    libmill. Go-style concurrency in C.
    libslack. a library of general utilities designed to make UNIX/C programming a bit easier on the eye.

---

https://github.com/fuchsia-mirror/magenta/blob/master/docs/mg_and_lk.md https://github.com/littlekernel/lk/wiki

---

a list of things in C++11:

" A list of features and some discussion can be found on my C++0x FAQ. Here is a subset:

    atomic operations
    auto (type deduction from initializer)
    C99 features
    enum class (scoped and strongly typed enums)
    constant expressions (generalized and guaranteed; constexpr)
    defaulted and deleted functions (control of defaults)
    delegating constructors
    in-class member initializers
    inherited constructors
    initializer lists (uniform and general initialization)
    lambdas
    memory model
    move semantics; see rvalue references
    null pointer (nullptr)
    range for statement
    raw string literals
    template alias
    thread-local storage (thread_local)
    unicode characters
    Uniform initialization syntax and semantics
    user-defined literals
    variadic templates 

and libraries:

    Improvements to algorithms
    containers
    duration and time_point
    function and bind
    forward_list, a singly-linked list
    future and promise
    garbage collection ABI
    hash_tables; see unordered_map
    metaprogramming and type traits
    random number generators
    regex, a regular expression library
    scoped allocators
    smart pointers; see shared_ptr, weak_ptr, and unique_ptr
    threads
    atomic operations
    tuple "

-- http://www.drdobbs.com/cpp/the-c0x-remove-concepts-decision/218600111?pgno=3

" C++11 feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It's easier now: Your ideas will map to enumerations, objects, classes (e.g. control of defaults), class hierarchies (e.g. inherited constructors), templates, aliases, exceptions, loops, threads, etc., rather than to a single "one size fits all" abstraction mechanism.

...

 The result has been a language with greatly improved abstraction mechanisms. The range of abstractions that C++ can express elegantly, flexibly, and at zero costs compared to hand-crafted specialized code has greatly increased. When we say "abstraction" people often just think "classes" or "objects." C++11 goes far beyond that: The range of user-defined types that can be cleanly and safely expressed has grown with the addition of features such as initializer-lists, uniform initialization, template aliases, rvalue references, defaulted and deleted functions, and variadic templates. Their implementation eased with features, such as auto, inherited constructors, and decltype. These enhancements are sufficient to make C++11 feel like a new language.

...

 I would have liked to see more standard libraries. However, note that the standard library definition is already about 70% of the normative text of the standard (and that doesn't count the C standard library, which is included by reference). Even though some of us would have liked to see many more standard libraries, nobody could claim that the Library working group has been lazy. It is also worth noting that the C++98 libraries have been significantly improved through the use of new language features, such as initializer-lists, rvalue references, variadic templates, noexcept, and constexpr. The C++11 standard library is easier to use and provides better performance than the C++98 one.

...

    Machine model and concurrency -- provide stronger guarantees for and better facilities for using modern hardware (e.g. multicores and weakly coherent memory models). Examples are the thread ABI, futures, thread-local storage, and the atomics ABI.
    Generic programming -- GP is among the great success stories of C++98; we needed to improve support for it based on experience. Examples are auto and template aliases.
    Systems programming -- improve the support for close-to-the-hardware programming (e.g. low-level embedded systems programming) and efficiency. Examples are constexpr, std::array, and generalized PODs.
    Library building -- remove limitations, inefficiencies, and irregularities from the abstraction mechanisms. Examples are inline namespace, inherited constructors, and rvalue references.

...

Are there any features you don't like? Yes. There are also features in C++98 that I don't like, such as macros. The issue is not whether I like something or if I find it useful for something I want to do. The issue is whether someone has felt enough of a need to convince others to support the idea or possibly if some usage is so ingrained in a user community that it needs support. "

-- http://www.stroustrup.com/C++11FAQ.html#think

---

" What I do not want to try to do: • Turn C++ into a radically different language • Turn parts of C++ into a much higher-level language by providing a segregated sub-language " -- Thoughts about C++17 by Bjarne Stroustrup

interestingly, i DO want to do the second with Oot

elsewhere, Stroustrup says:

"Were I to design a new language for the kind of work done in C++ today, I would again follow the Simula model of type checking and inheritance, not the Smalltalk or Lisp models." "a couple of language-technical criteria:

i don't know the Smalltalk or Lisp models vs the Simula model too well, so i can't comment on that yet.

for Oot, i agree on "Provide as good support for user-defined types as for built-in types"

for Oot, i don't want to "Leave no room for a lower-level language below C++ (except assembler)", quite the opposite, i want a high-level language

--

" Keep simple things simple for the majority of programmers. Note that auto and range-for loops are invariably near the top of people’s list of useful C++11 features. They are also among the simplest facilities we provided. "

--

what Stroustrup wants for C++17:

"

another list of his in the same paper:

" Improve support for large-scale dependable software: oModules (to improve locality and improve compile time; http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2015/n4465.pdf and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4466.pdf ) oContracts (for improved specification; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4378.pdf and http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2015/n4415.pdf) oA type-safe union (probably functional-programming style pattern matching; something based on my Urbana presentation, which relied on the Mach7 library: Yuriy Solodkyy, Gabriel Dos Reis and Bjarne Stroustrup: Open Pattern Matching for C++. ACM GPCE'13.)

Provide support for higher-level concurrency models: oBasic networking (e.g. asio, http://open-std.org/JTC1/SC22/WG21/docs/papers/2015/n4478.html) oA SIMD vector (to better utilize modern high-performance hardware; e.g., http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4454.pdf but I’d like a real vector rather than just a way of writing parallelizable loops) oImproved futures (e.g., http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3857.pdf and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3865.pdf ) oCo-routines (finally, again for the first time since 1990; https://isocpp.org/files/papers/N4402.pdf, https://isocpp.org/files/papers/N4403.pdf , and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4398.pdf ) oTransactional memoryhttp:open-std.org/JTC1/SC22/WG21/docs/papers/2014/n4302.pdf) oParallel algorithms (incl. parallel versions of some of the STL; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4409.pdf )

Simplify core language use, especially as it relates to the STL and concurrency, and address major sources of errors: oConcepts (for better generic programming; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3701.pdf and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4361.pdf ) oConcepts in the standard library (based on the work done on Rangeshttp:www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4263.pdf ) oRanges (simplifies STL use, among other things; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4128.html and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4382.pdf ) oDefault comparisons (to complete the support for fundamental operations; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4475.pdf and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4476.pdf ) oUniform call syntax (among other things: it helps concepts and STL style library use; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4474.pdf ) 3 Stroustrup Thoughts about C++17 N4492 oOperator dot (to finally get proxies and smart references; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4477.pdf ) oarray_view and string_view (better range checking, DMR wanted those: "fat pointers"; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4480.html ) oarrays on the stack (stack_array anyone? But we need to find a safe way of dealing with stack overflow; http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4294.pdf ) ooptional (unless it is subsumed by pattern matching, and I think not in time for C++17, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4480.html )

"

-- http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4492.pdf

---

C++ initially included "classes, basic inheritance, inlining, default function arguments, and strong type checking in addition to all the features of the C language." [1]

The first C++ compiler (Cfront) compiled C++ to C, but was eventually abandoned b/c "it became difficult to integrate new features into it, namely C++ exceptions"

In 1983, "Many new features were added around this time, the most notable of which are virtual functions, function overloading, references with the & symbol, the const keyword, and single-line comments using two forward slashes (which is a feature taken from the language BCPL)." In 1985, "The language was updated again in 1989 to include protected and static members, as well as inheritance from several classes."

Some of the C++11 new features were "regular expression support (details on regular expressions may be found here), a comprehensive randomization library, a new C++ time library, atomics support, a standard threading library (which up until 2011 both C and C++ were lacking), a new for loop syntax providing functionality similar to foreach loops in certain other languages, the auto keyword, new container classes, better support for unions and array-initialization lists, and variadic templates."

-- http://www.cplusplus.com/info/history/

---

niftich 19 hours ago [-]

(Edited to clarify that I think Java was good in the very beginning, less so in the middle, and is good once again)

Java started off as the Golang of the 90s: small, simple, opinionated, statically-typed, in many ways state-of-the-art, and engineered to protect the programmer from doing bad things (either due to ignorance or will). It came with a batteries-included standard library.

Then, in a quirk of fate, it became very popular. Though its library provided clean abstractions, you had to chain several of them together to actually accomplish anything: this is why it was so verbose. Everyone was writing Java, but no one was sharing Java, yet. Before Apache (Jakarta) Commons, no one bothered to make wrappers for basic stuff... just like how we were writing bad JS before jQuery or underscore came out. We were caught between our own experimentation, Sun's desire to take over the desktop, and companies like IBM and Oracle that wanted to ensure they could sell commercial support for big monolithic Java appservers.

When the 'Design Patterns' book came out, someone should've made a framework, but instead we sprinkled AbstractStrategyFactoryBuilders everywhere because that's what the enterprise libraries did... and it read like a bad parody of object-oriented programming, but by now, we've learned our mistake.

Now that bad APIs have fallen out of favor, Java EE is nearly abandoned, Spring brought some sanity but also forced us to consider why we're even thinking about convention over configuration, Java is much, much nicer. In the Maven, Github, CI world, we can -- environment and company policies permitting -- easily pull in libraries and frameworks that allow us to actually get stuff done at the correct level of abstraction.

reply

hota_mazi 16 hours ago [-]

I feel you haven't taken a look at Java in the enterprise in a long time.

Spring is considered to be an overloaded bloated mess and Java EE has become lean and mean with plenty of very capable specifications and implementations (JPA, CDI, etc...).

And FYI, the Design Patterns book came out before Java (1994 vs. 1995) so I think you have your timelines confused. The early editions of Design Patterns didn't contain a single line of Java; they were mostly C++ (and some Smalltalk).

Java is twenty years old, and for such an old geezer, it's adapted remarkably well (and between Java 8 and Kotlin picking up momentum, its legacy seems to be well assured).

reply

niftich 15 hours ago [-]

I was trying to express a lot of ideas in a small space, and I have edited it since, but it's still not as clear as I'd like. You and I agree, but I have struggled to get this point across.

Spring is a bloated mess, but when it first came out it was a lean new framework and a relief from EJBv1 and EJBv2. In response, EJBv3 was much better, but in the meantime Spring became the new normal, and grew into some strange swiss army chainsaw glue with now-deprecated awful xml configuration replaced by the horror of not-quite-code-but-compiled-into-the-classfile @Annotations!

The 'new Spring' is Spring Boot, which was a response to this then-obscure framework called Dropwizard which showed that all you had to do was pick a handful of very good libraries to get work done. Guava, Jersey, Jackson, Jetty; some decent ORM, and you're good to go. Some of these are backed by new Java specs like JAX-RS and JPA that are legitimately pretty good.

I think the biggest problem Java still has in the hearts and minds of people outside the 'enterprise' is not because of Java, but because of the complexities that only manifest in environments where your code isn't 100% greenfield every time.

Dependency Injection isn't something you realize you need until you realize you need it -- and then you realize you need it Really Badly.

Externalized XML config files declaring reusable beans isn't something you think you need until your client is asking how they can reconfigure something in the shipped software when they can't just recompile it to what they need.

reply

slantedview 3 hours ago [-]

> The 'new Spring' is Spring Boot, which was a response to this then-obscure framework called Dropwizard

This is correct, but what pisses me off is that the Spring folks clearly copied Dropwizard, then failed to even mention Dropwizard in their various docs, blogs, talks, etc, which, you're thinking, is fine. But they DO mention other (less useful) projects as supposed alternatives for Spring Boot, essentially pretending as if Dropwizard doesn't even exist when clearly it's the reason that Spring Boot exists! Shameful.

Spring hails Boot as the new way of doing things, but it's still by no means a clean break for the rest of the Spring ecosystem where programming by XML (or annotations) is still king and the APIs are still a horrible, overcomplicated mess. There's no undoing that.

reply

---

hota_mazi 12 hours ago [-]

Annotations are data about the code. The fact that you can't change them without recompiling the code is a feature, not a bug.

If you want that kind of flexibility, use external files (e.g. XML) but now the two can get out of sync, which is what is commonly referred to as XML hell.

The rule of thumb is simple: whenever you need to add information about something in your source (class, method, field, package), use an annotation. If you need to add information about something that is not source code (port, host, various sizes, etc...) then use an external file. In particular, if you specify a reference to Java code in your XML, you should use an annotation instead.

reply

---

can i just use WebAssembly as the portable target language? No, WebAssembly is not QUITE easy enough to port, having stuff like int/float 32/64 promotions and stuff like sqrt and ternary ?: in addition to IF. Also, it doesn't specify a simple way to incrementally override library functions with platform-native ones (but i bet it would be easy to have a convention for this).

otoh since WebAssembly will be more popular than whatever i invent, in practice it may end up being worth it to just use WebAssembly anyways.

---

i'm thinking about tuples. RPython, which is typed, specifies that a tuple-typed variable must have a fixed length known at compile time. Interestingly, a few days ago i saw the same thing in Copperhead, a typed subset of Python.

Interestingly, the RPython docs point out that "There is no general way to convert a list into a tuple, because the length of the result would not be known statically. (You can of course do t = (lst[0], lst[1], lst[2]) if you know that lst has got 3 items.)".

At first glance, this idea of forcing tuples to have a statically-known length sounds good to me, as it simplifies things. But i bet there's metaprogrammy cases where you really want to convert a list of unknown length into a tuple.

So this is interesting.

One answer would be to have generics in the type system and then to provide in the language a magic function to make a tuple out of a list. Note that that means that the generic type of the resulting tuple wouldn't be resolved to a non-generic one at compile-time, though, so you'd have to have dynamism in the runtime there (which i'm cool with).
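to make the distinction concrete, here's a sketch in Python type-hint notation (mypy-style; this is my illustration, not RPython syntax):

    from typing import List, Tuple

    def to_triple(lst: List[int]) -> Tuple[int, int, int]:
        # fine statically: the result arity (3) is known at compile time
        return (lst[0], lst[1], lst[2])

    def to_tuple(lst: List[int]) -> Tuple[int, ...]:
        # the 'magic function': the result's length is only known at runtime, so
        # its type stays variadic/generic instead of resolving to a fixed arity
        return tuple(lst)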

---

i guess Fantom is like Go w/r/t generics?:

" Currently Fantom takes a limited approach to generics. There is no support for user defined generics yet. However, three built-in classes List, Map, and Func can be parameterized using a special syntax. For example a list of Ints in Fantom is declared as Int[] using the familiar array type syntax of Java and C#. This trade-off seems to hit the sweet spot where generics make sense without complicating the overall type system. "

---

i don't agree with all of these but:

" Other little things we included in Fantom which we found frustrating about Java:

    Default parameters: methods can have default arguments - no more writing boiler plate code for convenience methods
    Type Inference: local variables use type inference to avoid all the noise that obscures typical Java code
    Field Accessors: field accessors are defined implicitly - another area where idiomatic Java code has a low signal to noise ratio
    Nullable Types: declare your intention in code whether null is a legal value for a parameter or return type instead of relying on documentation
    Checked Exceptions: checked exceptions are evil syntax salt. Checked exceptions don't scale, don't version, and don't allow composable systems - all the reasons why Anders Hejlsberg didn't include checked exceptions in C#.
    Numeric Precision: Fantom doesn't include support for 32-bit, 16-bit, and 8-bit integers and floats. There is only a 64-bit Int and a 64-bit Float. This eliminates a lot of complexity associated with precision problems such as file lengths, Unicode characters, or very large lists. Although much of the Java and C# implementation is 16-bit chars and 32-bit ints under the covers, the Fantom APIs themselves are future proof.

"

---

https://en.wikipedia.org/wiki/Libquantum

---

" Content-addressable memory is supposed to speed up associative array look-ups. There's a well-known aphorism by Alan Perlis – "A language that doesn't affect the way you think about programming is not worth knowing". Here's my attempt at an aphorism: "A processor that doesn't affect the way you access memory is not worth building".... CAM goes way beyond providing more bandwidth or helping with the addressing – it adds comparison logic to each memory word. While it sounds impractical to replace all of your RAM with CAM, stashing a CAM array somewhere inside your system could help with some problems. "

---

" It's important to remember C was designed for the processors of the time, whereas processors of today (RISC-V included) are arguebly primarily machines to run C. C has brought a lot of good but also a lot of bad that we are still dealing with: unchecked integer overflows, buffer under- and overflow, and more general memory corruption. No ISA since the SPARC even tries to offer support for non-C semantics. "

---

faragon 611 days ago [-]

Well, BSD sockets don't use file abstraction. Plan 9 recovered "Unix philosophy" in that regard. In my opinion.

---

JeffreySnover 2 hours ago [-]

That is not quite right. At the end of the day it is all about automation. But we took the position that the way Admins automate is by scripting the things that they do in an interactive shell. Bruce Payette had a great way of saying it - he said that the lifecycle of 99% of scripts starts at the prompt and ends at the carriage return. Then, sometimes you want to codify things, so you put them into a script. Then as the function becomes more used and more important, you want to make it more and more formal without having to throw everything away and start with a new language.

So YES an interactive shell is important, but it is important as a stepping stone to automation.

That is the PowerShell mindset. Jeffrey Snover [MSFT]

reply

lucb1e 33 minutes ago [-]

That is not quite right either. I enter about 200 commands on a normal workday and write maybe two scripts a month in a shell language. A shell language should be a shell language, it's not merely a stepping stone to write a script after using a command just once. If that's what I'm looking for I'll use Python. And from the other side, Python would be a terrible shell (by default), and that's fine because that's not what it's used for.

If Powershell is made for both, it has to make compromises both as a shell language and as a scripting language. This is one of the things that annoy me about the Windows ecosystem: it ships with no languages beyond a shell, which makes scripting for it very annoying, and you can't even install a language of choice via a package manager (one has to go find a download somewhere on the web).

reply

---

 ldiracdelta 3 hours ago [-]

I don't see a tremendous amount of accumulated wisdom in Powershell. It may be newer, but I'm not sure they looked at prior art or had design members who were shell gurus in any other shell than `cmd.exe`, which is a horrific shell. Why are commands horrendously long in powershell? If you spend time in the shell, you don't want your fingers falling off due to overuse. Aside... one of the design decisions Microsoft took that absolutely blows my mind is using `\` for file hierarchies instead of `/` when you're writing an operating system in C!

reply

JeffreySnover 2 hours ago [-]

There are lots of aliases for cmdlets for exactly this reason - interactive use. The verbose commands are useful when reading a script that isn't working at 3am in the morning when your boss is breathing down your neck to fix it "stat!".

RE - your aside '\' vs '/' - I think most of us still shake our head at that decision.

Jeffrey Snover [MSFT]

reply

---

Dec. 27th, 2015 05:50 am maxim_sokhatsky: Robert Harper, Author of Practical Foundations for Programming Languages

SML is clean, simple, elegant, and effective. No categorial bullshit field, just solid functional programming.

https://www.quora.com/Are-there-any-minimal-functional-programming-languages


Bringing Lambda Cube to Erlang. Dec. 25th, 2015 08:13 pm maxim_sokhatsky: TL;DR: the compiler pipeline of a modern functional language with a pure type system core.

As you know, a compiler pipeline can consist of up to 30 [Erlang] AST passes and transformations; however, the idea of having a compact and simple compiler foundation obviously leads to an implementation that is easy to understand and reason about. Erlang shows us that even with such a long pipeline you can still gain the positives of untyped lambda calculus, namely compilation speed. That speed is what we want to see in the final prototype. Our pipeline, as a first approximation, can be seen as three layers:

Note that only AST transformations are applied; no code generation is involved, as we totally rely on the Erlang compiler foundation. The Idris idea of having one language for both reasoning and general programming is present in our pipeline. Code extraction is just type erasure and the filling of holes (typed Effects in the form of a Free Monad).


---

"untyped lambda calculus is used as an extraction target for many provers (Idris, F*), and also manifests in different domain languages (JavaScript?, Erlang)." [2]

---

the commenters on this page make a good argument against putting in the effort to compile one language to readable code in another:

https://www.reddit.com/r/haskell/comments/3pmtj0/a_subset_of_haskell_compiled_to_maintainable_code/

however i would counter that this isn't useful in itself so much as it's useful for interoperability; to be able to pass complex data structures, first-class functions, etc between modules written in different languages. Also, you often see API wrappers that are basically cross-compiled from one language to another.

---

Morte deals only with total functions (always terminating programs): [3]

---

to review some of the reasons that existing languages are not good enough, and we need Oot:

Haskell is too verbose and hard to read, is not gradually typed, is not portable enough, and is not good enough on the commandline

Python uses too much mutation and aliasing and eagerness in its stdlibs, cannot do Quicksort as nicely as Haskell, has the GIL, does not have macros (by default), is gradually typed, is not portable enough, and is not good enough on the commandline

Lisp is too verbose, has too much mutation in its stdlibs, does not have a statically typed library ecosystem.

---

" let’s assume that you wanted to get a list of names of all files in a directory and then do something with those names. In UNIX, you’d have something like:

for f in *; do echo "Processing $f file..."; done

While in PowerShell, you'd go with something similar to:

Get-ChildItem "." |
Foreach-Object { $name = $_.Name; Write-Output "Processing $($name) file..." }

An equivalent functionality in Python can be achieved with:

from os import listdir

for f in listdir('.'): print('Processing {} file...'.format(f)) "

---

some language features that some guy likes:

https://docs.google.com/presentation/d/15MWKV5aVxX-SVNvcSwn2e5hvbsuS8Jkx5pA_Z2m22dw/pub#slide=id.g370b76a80_092

"

and some he likes from Swift: "

---

notes on things i think are interesting from https://blogs.msdn.microsoft.com/dotnet/2016/08/24/whats-new-in-csharp-7-0/ :

patterns. Initially there are three kinds of patterns: constant patterns, type patterns ('T x'), and var patterns.

and they can initially appear in two places: 'is expressions' and 'case' clauses in switch statements.

Note that the patterns bind variables that can be used later (for 'is expressions'; i guess this is like an implicit variable assignment or 'let') or in the enclosed scope (for switch/case).

Note: patterns (at least in switch/case clauses) do not match 'null' unless 'null' is put there explicitly

Note: in C# switch/case clauses are evaluated in order, except that 'default' is as if it's last. For Oot, probably better just to make it last.
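(a rough analogue of these notes in Python 3.10+ 'match' syntax, as a sketch; Python's semantics differ from C#'s, e.g. a bare capture or wildcard pattern does match None:)

    def describe(obj):
        match obj:
            case None:                # null only matches when stated explicitly
                return "null"
            case int(n) if n > 0:     # type pattern plus guard; binds n for later use
                return "positive int %d" % n
            case [x, y]:              # destructuring pattern; binds x and y
                return "pair %r, %r" % (x, y)
            case _:                   # the 'default', placed last
                return "no match"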

Tuples. Note that C# tuples can be structs with mutable fields. C# tuple members can have names:

(string first, string middle, string last) LookupName(long id) // return tuple elements have names
{
    ... // retrieve first, middle and last from data storage
    return (first, middle, last); // tuple literal
}

var names = LookupName(id);
WriteLine($"found {names.Item1} {names.Item3}.");

Deconstruction/(destructuring bind). Not just for tuples; any (object?) can be deconstructed if it supports the 'deconstruction' (interface?): public void Deconstruct(out T1 x1, ..., out Tn xn) { ... }
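(a rough Python analogue of the Deconstruct protocol, with a made-up Name class: an object opts in to destructuring by being iterable:)

    class Name:
        def __init__(self, first, middle, last):
            self.first, self.middle, self.last = first, middle, last
        def __iter__(self):
            # yields components in order, enabling 'first, middle, last = name'
            return iter((self.first, self.middle, self.last))

    first, middle, last = Name("Augusta", "Ada", "King")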

Nested functions (with closures and upvalues it seems).

Returnable refs.

" Just like you can pass things by reference (with the ref modifier) in C#, you can now return them by reference, and also store them by reference in local variables.

    public ref int Find(int number, int[] numbers)
    {
        for (int i = 0; i < numbers.Length; i++)
        {
            if (numbers[i] == number)
            {
                return ref numbers[i]; // return the storage location, not the value
            }
        }
        throw new IndexOutOfRangeException($"{nameof(number)} not found");
    }

    int[] array = { 1, 15, -39, 0, 7, 14, -12 };
    ref int place = ref Find(7, array); // aliases 7's place in the array
    place = 9;                          // replaces 7 with 9 in the array
    WriteLine(array[4]);                // prints 9

... There are some restrictions to ensure that this is safe:

Custom 'Task-like' types that can be returned from 'async' methods instead of 'Task<T>'.

Expression-bodied methods, properties, accessors, constructors and finalizers, e.g. public string Name {get => names[id]; set => names[id] = value;}

Throw as an expression in certain places, e.g. return (parts.Length > 0) ? parts[0] : throw new InvalidOperationException("No name!");

---

so this (previous section) is a nice example of how 'references' can differ from pointers. Operationally, the Ref seems just like an (immutable) pointer. But, to change the thing it points to, you just did "place = 9", not "*place = 9".

Note that they can only be created pointing (a) to objects that were passed to you, or (b) into objects with an immutable size.

Makes me wonder: (a) how is this safe? What if you point to something in a mutable structure that is later mutated to be smaller, so that the pointer is no longer pointing into it? Well, i guess the answer is that they are restricting Refs to point into things that can't change their size; structs, and arrays (each array itself has a fixed size, although a C# Array is a POINTER to the underlying array, and so can be 'resized' by allocating a new array and changing the pointer). Note that although C# has an Array.Resize, this actually only creates a new Array and redirects the Ref that you passed in to point to the new array: http://stackoverflow.com/a/4840817/171761 (b) perhaps you could do a little pointer arithmetic safely, say by updating a pointer into an array to point at the next element within the same array. But how to keep this safe? Instead of just passing around a pointer, you'd want to pass around a 'descriptor' or 'fat pointer' which would contain bounds in addition to the pointer itself. If the underlying object were of mutable size, then instead of a descriptor, you'd want a pointer to the descriptor, so that if someone else resizes the target object, they can change the descriptor too; but in this case this additional layer of indirection wouldn't be needed, since the underlying object would be of fixed size.
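here's a toy Python sketch of the descriptor/'fat pointer' idea in (b) (my names, just to pin the idea down): the reference carries its bounds, so in-object pointer arithmetic can be checked:

    class FatPointer:
        def __init__(self, arr, index):
            self.arr, self.index = arr, index   # bounds are implied by len(arr)
        def get(self):
            return self.arr[self.index]
        def set(self, value):
            self.arr[self.index] = value
        def advance(self, n=1):
            # checked pointer arithmetic: may not escape the underlying object
            j = self.index + n
            if not 0 <= j < len(self.arr):
                raise IndexError("descriptor would escape its object")
            return FatPointer(self.arr, j)

    xs = [1, 15, -39, 0, 7, 14, -12]
    p = FatPointer(xs, 4)   # aliases 7's place, like the C# ref example above
    p.set(9)                # xs[4] is now 9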

---

for Oot, i can't decide between allowing pointers only into objects with immutable size, or representing them with a pointer to a descriptor. Maybe both are (transparently) allowed (the implementation would choose the former when possible)? Two small issues with the latter: (a) if one object may contain different subobjects, then there might be more than one child descriptor associated with it? i think that's avoidable though (if an object has a pointer to something else, just consider that a different object with its own, single, descriptor) (b) if the object is resized an existing pointer may become no longer valid, so you'd have to check these guys for NULL all the time. y'know though, checking those guys for NULL all the time will be annoying. Also, concurrency probs here. So mb only allowing pointers into objects of immutable size is better here, because it's simpler.

---

so, for Oot, i'm thinking:

---

"

headmelted 9 hours ago [-]

Edit-and-Continue is the greatest productivity feature ever added to a development stack for the real conditions one would encounter in large projects, and it astonishes me even now how misunderstood its purpose is.

Consulting has me moving jobs a lot and I encounter a veritable kaleidoscope of crappy work. Being able to correct minor issues in scope without restarting, while ten form posts deep in an archaic webforms app with dodgy "we-didnt-know-so-we-rolled-our-own" state management has been a lifesaver. Thank you nineties VB team, it's because of you that I'm not rocking back-and-forth in a padded cell making animal noises today.

reply

CuriousSkeptic 3 hours ago [-]

If you've ever tried jrebel or even the default hot swap feature on the JVM the edit-and-continue feature feels like a bad joke though. Any non-trivial change requires a restart, and you are forced through a series of distracting button presses and dialogs any time you want to use it.

In Eclipse (for example) you just start typing code, (none of this silly locking the IDE while debugging stuff), and the debugger will update the running program instantly. With jRebel you can even change things structurally without restarting.

Oh how I would love an actually working hot swap feature for .Net.

reply "

---

daxfohl 15 hours ago [-]

All I want is easy syntax for immutable records, and a C#-like fluent syntax for .WithPhoneNumber(phoneNumber) etc. Seems like it'd be a cinch to implement, and for me it's the single handiest feature from F#.

In F# these are "guaranteed" to be non-null, even though null instances come up during deserialization all the time. In C# use cases, I don't think the non-null "guarantee" should be made or implied, just a POCO.

reply
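(a minimal Python sketch of what daxfohl is asking for, immutable records plus fluent updates; typing.NamedTuple's _replace plays the role of .WithPhoneNumber(...):)

    from typing import NamedTuple

    class Contact(NamedTuple):
        name: str
        phone_number: str = ""

    c1 = Contact(name="Ada")
    c2 = c1._replace(phone_number="555-0100")  # returns a new record; c1 unchanged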

CyrusNajmabadi 15 hours ago [-]

Hi there! C# language designer here.

You can see (and participate in) the discussion on records here if you'd like: https://github.com/dotnet/roslyn/issues/10154

> Seems like it'd be a cinch to implement

Ah... how i wish that were so :)

reply

daxfohl 15 hours ago [-]

Indeed. While records are my favorite feature from F#, they could be improved.

I'd love it if record definitions were extensible (even abstractly--sometimes you want a record with just the data, and sometimes including db-centric "id, createdTime, etc"), and there should be a way of defining / converting between them without a ton of boilerplate. That would allow something like:

    record DbStuff = { id: int; created: DateTime }

    record UserRecord : DbStuff { name: string; ... }

    var userRecord = new UserRecord(...)

    var userData = userRecord.WithoutDbStuff()   // but what would be the reflected name of this type?

    var newRecord = userData.WithDbStuff(3, DateTime.Now)


Sometimes you want to be able to define records

PhotoData{source, metadata, yada, etc, url} and

PhotoDisp{source, metadata, yada, etc, bytes}

without the repetition, and again with an easy way to convert between the two. (And yes you could simplify the above by using containership but oftentimes there are cross-cutting things that containership doesn't solve. You really want a flattening solution.)

Easy integration with C# anonymous classes should be considered too.

C# has always been the more real-world-centric language so I'd hope these common use cases would be considered.

reply

JamesBarney 2 hours ago [-]

I second this! The ability to easily create intersection types would be awesome!

reply

---

one guy's opinion on C# 7.0:

hvidgaard 11 hours ago [-]

I don't like the idea of accessing tuple values by their local name

    var names = LookupName(id);
    WriteLine($"found {names.first} {names.last}.");

It's leaky and should simply use the deconstructing syntax instead.

I'm on the fence about the ref returns. It can lead to some fantastic performance improvements and enable otherwise unusable datastructures. But is C# really the language you want if that is important to you? Why not go all in and use a lib in C or C++ for the critical parts of the code?

I still miss better immutability support, and probably most of all, the ability to declare something as non-nullable.

But that said, the update is a fantastic update to an already good language.

reply

---

galfarragem 4 days ago [-]

Ordering languages and frameworks by amount of 'love' (edited with small adjustments):

---

example of compact Python:

    import sys
    from collections import Counter
    
    cnt = Counter(n.strip() for n in sys.stdin)
    
    for key, val in cnt.most_common():
        print(val, key)
	

[4] [5]

---

http://okigiveup.net/arguments-against-json-driven-development/

lmm 3 days ago [-]

The main reason this happens in Python is that creating actual datatypes is incredibly clunky (by Python standards) because of the tedious "def __init__(self, x): self.x = x". The solution here is to have a very lightweight syntax for more specific types, e.g. Scala's "case class".

I'd also argue for using thrift, protobuf or even WS-* to put a little more strong typing into what goes over the network. Such schemata won't catch everything (they have to have a lowest-common-denominator notion of type) but distributed bugs are the hardest bugs to track down; anything that helps you spot a bad network request earlier is well worth having.

reply

aeruder 3 days ago [-]

An article about the "attrs" library was posted here a couple weeks ago. Really highlighted the tedium of Python objects while offering a neat solution.

https://glyph.twistedmatrix.com/2016/08/attrs.html
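(the declarative style that article advocates looks roughly like this; a sketch assuming the 'attrs' package, whose import name is 'attr':)

    import attr

    @attr.s
    class Point(object):
        x = attr.ib()
        y = attr.ib(default=0)

    # __init__, __repr__, and __eq__ are generated, replacing the tedious
    # "def __init__(self, x): self.x = x" boilerplate mentioned above
    assert Point(1, 2) == Point(1, 2)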

Regarding protobuf, I'm a bit disappointed with the direction of version 3. Fields can no longer be marked as required - everything is optional; i.e. almost every protobuf needs to be wrapped with some sort of validator to ensure that necessary fields are present. I understand the arguments, but I did enjoy letting protobuf do the bulk of the work making sure fields were present.

reply

tantalor 3 days ago [-]

Required fields are bad; don't use them.

You should be very careful about marking fields as required. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field – old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead.

https://developers.google.com/protocol-buffers/docs/proto#sp...

reply

tantalor 3 days ago [-]

Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index.

https://docs.python.org/2/library/collections.html#collectio...

reply
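(a quick sketch of the usage being described, adapted from the linked docs:)

    from collections import namedtuple

    Point = namedtuple('Point', ['x', 'y'])
    p = Point(11, y=22)
    print(p.x + p.y)    # access fields by name ...
    print(p[0] + p[1])  # ... or by position, like a regular tuple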

sevensor 3 days ago [-]

Named tuples are pretty great! I used to think their immutability was a drawback, but I'm starting to come around to the opposite point of view.

reply

joshmarlow 3 days ago [-]

I've not yet made use of it, but using the `type` keyword to create new classes quickly looks promising.

https://docs.python.org/3.5/library/functions.html#type

reply

mythz 3 days ago [-]

This isn't JSON-driven development, it's just choosing to apply logic over loose-typed data structures instead of named constructs. It's more awkward in Python because it doesn't have sugar syntax to index an object like JavaScript has.

But using clean built-in data structures instead of named types has its benefits, especially if you need to serialize for persistence or communication, as it doesn't require any additional knowledge of Types in order to access serialized data, so you can happily consume data structures in separate processes without the additional dependency of an external type system that's coupled and needs to be carried along with your data.

This is why Redux uses vanilla data structures in its store and why JSON has become popular for data interchange: any valid JSON can be converted into a JavaScript object with just `JSON.parse()`, which saves a tonne of ceremony and manual effort compared to the old school way of having to extract data from data formats with poor programmatic fit, like an XML document, into concrete types.

If your data objects don't need to be serialized or accessed outside of the process boundary then there's little benefit to using loose-typed data structures, in which case my preference would be using classes in a static type system to benefit from the static analysis feedback of using Types.

reply

---

    somefile = open("file.txt")
    line = somefile.readline()
    while line:
        print line
        line = somefile.readline()
    somefile.close()

But I'm still using some magic, I should be more explicit:

    def explicit_readline(fd):
        buff = []
        char = fd.read(1)
        while char != "\n" and char != "":
            buff.append(char)
            char = fd.read(1)  # read the next char; without this the loop never ends
        return "".join(buff)
    somefile = open("file.txt", "rb")  # just to be sure now
    line = explicit_readline(somefile)
    while line != "":  # to be more explicit, of course
        print line
        line = explicit_readline(somefile)
    somefile.close()

---

https://glyph.twistedmatrix.com/2016/08/attrs.html

---

" Once in a while it's interesting to step back and look at the whole mess. Are you so calloused that you cannot imagine it any better? Can you not imagine a world where one could operate in native code without the concept of volatile variables? Can you not imagine an operating system with a simple integrated IPC bus? Can you not imagine a system without signals that are executed asynchronously? Do you really believe that it cannot be done any other way?

Source: Ryan Dahl "

antirez 1793 days ago [-]

I agreed with the general ideas in its original rant, but IMHO the problem is not the POSIX API nor the C language (including "volatile"), those are both well designed and simple stuff, for the most part (the C standard library is horrid unfortunately).

IMHO most of the problems are about the other layers: tricks you need to know about the operating system implementation of POSIX, or dynamic library loading, all the subtle things with different binaries formats, and so forth.

A few of these things are easy to solve; for me it is impossible to understand how the libC can be in this sad state, and how the replacements and improvements to it like glib are also a mess. If one day I'll not hack on Redis anymore my mission will be, assuming I'll have another way to pay my bills, to create a replacement for the C standard library.

rphlx 1793 days ago [-]

POSIX is beautiful. d-bus != POSIX.

slaughterhaus 1793 days ago [-]

why does he keep mentioning dbus and glib? I haven't been following node.js development and last time i checked in, node was written in c++.

---

kmm 1793 days ago [-]

What is so bad about the C standard library? What would you change?

I find it lacking a lot of essential features -- can you believe strdup isn't part of the C standard? -- but simplicity and minimalism has always been C's strongest point.

zeugma 1793 days ago [-]

Lack of a real string type. String manipulation is a pain and you have to allocate everything by yourself, which results in inefficient and dangerous code.

---

" Time-travel debugging and deterministic replay is such a fundamentally important feature, and it can expand our capabilities in many different ways that we're only starting to explore. Debugging is just one possible application; consider also exhaustive testing of error conditions, re-execution of optimistic transactions that hit a write conflict, temporal backtracking search over executions, data prevalence (though it doesn't solve the schema upgrade problem), and deterministic building. And remember that the hardest problem for reverse-mode automatic differentiation is figuring out how to "run the program backwards" in order to find the gradient of the output; deterministic replay strategies are directly applicable to this problem and therefore to generalized gradient descent. "

---

" Constraint programming allows you to write the specification of your program and then separately search for ways to fulfill that specification, either manually (by specifying search strategies/proof tactics) or automatically.

Constraint programming allows you to abstract away the question of which values in a subroutine are returned and which are provided as parameters. If you have only two such candidates, like the tired old °F↔°C example, this is a minor saving. If you have many parameters, it can be a major saving. "
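(a minimal sketch of that last point using the z3 solver on the tired °F↔°C example; the quote names no tool, so z3 is my assumption here. The same relation runs in either direction, so neither variable is fixed as 'the parameter':)

    from z3 import Real, Solver, sat

    C, F = Real('C'), Real('F')

    def solve_given(fact):
        s = Solver()
        s.add(F == C * 9 / 5 + 32)  # the specification: one relation, no direction
        s.add(fact)
        assert s.check() == sat
        return s.model()

    print(solve_given(C == 100))  # the solver derives F = 212
    print(solve_given(F == 32))   # same relation run 'backwards': C = 0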

---

javascript-ish language with static typing

http://www.cs.columbia.edu/~sedwards/classes/2016/4115-spring/proposals/JSJS.pdf

---

idea from http://doc.cat-v.org/bell_labs/structural_regexps/se.pdf : instead of

BEGIN{
y=1
}
/^/{
for(x=1; x<=length($0); x++)
if(substr($0, x, 1)=="#")
print "rect", x, y, x+1, y+1
y++
}

could have

BEGIN{
x=1
y=1
}
/ /{
x++
}
/#/{
print "rect", x, x+1, y, y+1
x++
}
/\n/{
x=1
y++
}

note that instead of replacement text, arbitrary code is run when each part of the pattern is matched.

---

mb 'refuse the temptation to guess, except when error handling'

---

"

The most important invention of the last decade, the World Wide Web, was not made by computer systems researchers...

The Failure of Systems Research

    We didn’t invent the Web
    Why not? Too simple
        Old idea » But never tried
        Wasteful » But it’s fast enough
        Flaky » But it doesn’t have to work
    Denial: It doesn’t scale
        Only from 100 to 100,000,000
" -- [6]

" Challenges: Programming u Concurrency – Using 100 processors/chip – Matching biological concurrency » What can you do in 100 cycles? u Declarative – SQL, spreadsheets the only successes so far u Intelligence – Data models/class hierarchy/knowledge rep u Uncertainty – Real - world input: speech, vision, ... – Adapting to environment " -- [7]

---

http://danluu.com/butler-lampson-1999/

" Capabilities

Still a No. You can make a case that many things we have today are inspired by capabilities, but capabilities as they were originally presented still aren’t really on the horizon. I find capabilities to be interesting because a lot of the smartest people I know went through a phase when they were enamored of them, but everyone who’s tried to implement them seems to become disillusioned.

I’m not the only person to observe this – Lampson has mentioned elsewhere that none of the folks who worked with him on the CAL TSS system (perhaps the first capability based system with actual users) wanted to build capability based systems after that experience. In that interview, Lampson also talks about how, given the very large complexity burden, the ROI of capabilities was very low, not just on their system, but in other systems that he knows of, too. " -- [8]

(but see below; Lampson is talking about hardware capabilities for the purpose of reducing bugs, not for security against an adversary; in the context of reducing bugs, he just thinks they're too expensive for what they buy you; imo if you think of them in terms of security you might come to a different conclusion)

" Fancy type systems

It depends on what qualifies as a fancy type system, but if “fancy” means something at least as fancy as Scala or Haskell, this is a No. That’s even true if you relax the standard to an ML-like type system. Boy, would I love to be able to do everyday programming in an ML (F# seems particularly nice to me), but we’re pretty far from that.

In 1999 C and C++ were mainstream, along with maybe Visual Basic and Pascal, with Java on the rise. And maybe perl, but at the time most people thought of it as a scripting language, not something you’d use for “real” development. PHP, Python, Ruby, and Javascript all existed, but were mostly used in small niches. Back then, Tcl was one of the most widely used scripting languages, and it wasn’t exactly widely used. Now, PHP, Python, Ruby, and Javascript are not only more mainstream than Tcl, but more mainstream than C and C++. C# is probably the only other language in the same league as those languages in terms of popularity, and Go looks like the only language that’s growing fast enough to catch up in the foreseeable future. Since 1999, we have a bunch of dynamic languages, and a few languages with type systems that are specifically designed not to be fancy.

Maybe I’ll get to use F# for non-hobby projects in another 16 years, but things don’t look promising. " -- [9]

---

" Lampson talks about his work to build a capability machine, and expresses that despite considerable interest from some research scientists, this is not a fruitful path for computer security. ... our basic idea was again, not so much the kinds of things that people t end to think about today when they think about security, which is how do we keep the bad guys out , but more using capabilities as a tool for making the whole system more reliable, by bounding the bad effects of bugs. So our model wasn’t so much there’s som e evil group in Russia that’s trying to break into the system, as there are two user programs running and we want to make very sure that even though they’re communicating in fairly complicated ways, one of them can’t have any effect s on the other except the ones that are intended by the receiving party. ... the thing had been a bust , because the effort required to do the capabilities, and the cost that were incurred in performance and resources expended , were totally disproportionate to making it easier to find half a dozen bugs. And I believe that’s been pretty much the lesson of every attempt to introduce hardware capabilities. ... In spite of this, of course, capabilities have been very successful in operating systems, it’s just that they’re not called that, they’re called file descriptors. UNIX has them , and every major operating system has them. The difference is t hat they’re not the foundation of the security system, they’re basically a performance optimization; that’s slightly oversimplified , but once you open a file, you get a file descriptor for which the security is controlled some other way. You get this thing called a file descriptor, and that then gives you permission to do things to the file , and as long as it stays open, you can pass it off to other processes , and so on and so forth. That’s exactly a capability, but it’s not long -lived " -- [10]

---

imagist 5 days ago [-]

JavaScript is missing some fundamental cryptographic primitives that make it impossible to write secure software with JS, most notably it lacks any cryptographically secure random number generator.

Additionally, package installs are insecure, so unless you have time to audit the entire `node_modules/` you can't actually guarantee that the code you want to run is the code you have installed. Additionally, because JS doesn't have any namespace restrictions, a vulnerability in any package allows access to the whole codebase.

For a survey of problems with JS (none of which have been fixed since this was written), take a look here: https://www.grc.com/sn/files/jgc-javascript-security.pdf

reply

---

this guy likes Node over Scala:

" When I was working all Scala, I was always looking to confirm my decision by reading positive reviews, reading about the switchers from Ruby, because I was spending huge amounts of time learning it. This was right up until the point that I switched to Node and could ship features 10x faster. "

---

" For instance, nulls are clearly a problem. After countless rants we're seeing languages that have better ways to deal with missing or optional values enter the mainstream (Swift, Scala, Rust).

J2EE's original idea to configure a system through separate XML files was heavily criticised for being too bureaucratic. After countless rants we got configuration by convention, better defaults and annotations as part of the source language. "

---

"Nothing that transcompiles into JavaScript? can fix JavaScript?'s lack of a native integer. "

---

https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#max_path-explanation-and-workarounds

" MAX_PATH explanation and workarounds

For the uninitiated, MAX_PATH is a limitation with many Windows tools and APIs that sets the maximum path character length to 260 characters. There are some workarounds involving UNC paths, but unfortunately not all APIs support it, and that's not the default. This can be problematic when working with Node modules because dependencies are often installed in a nested manner. "

---

lmm 8 days ago [-]

I use Scala with spray.io. Plain-old-code rather than annotations or config files (they've structured their API so that you can write a route definition that looks almost like a config file - see e.g. https://github.com/spray/spray/blob/release/1.3/examples/spr... - but it's all plain old code that you can refactor according to the normal rules of refactoring code). You can use callbacks if you really want but generally you just return Futures and compose them with for/yield, which is the sweet spot IMO: your code can look like:

    for {
      user <- loadUser(userId)
      bestFriendId = user.friendships.maxBy(_.strength).friendId
      bestFriend <- loadUser(bestFriendId)
      response = buildResponse(user, bestFriend)
    } yield response

so there's very little overhead to doing async calls (just the difference between "<-" and "="), but you do have visibility over which calls are async and which aren't when you're reading the code. (async/await are arguably even better in this specific case, but they're tied to async, whereas for/yield is generic functionality that you can also use to e.g. manage database transactions).

JSON-wise if you use spray-json-shapeless you get automatically derived json formats at compile time, so better performance than reflection-based approaches, O(1) code to support all your different data types, but if you accidentally try to include something that doesn't make sense to serialize (e.g. a file handle) then you'll get an error at compile time. Similarly, the output "adapter" is resolved implicitly at compile time, so once you've imported the adapter for spray-json and the spray-json-shapeless stuff you can do "complete(foo)" where foo is a Future[MyCaseClass] and it will work correctly, but if you try to complete with something that can't be converted to a valid HTTP response then you get the error at compile time.

(And if you're already using Java, it runs on the JVM, with seamless interop with Java libraries; you can run it as a servlet too, so you can migrate a web API one endpoint at a time (using servlet routing) if you like).

reply

---

parts of compiling are exactly the sort of parallelizable computation that Oot should be good at expressing in a parallel form; in this case, probably much of compiling is context-free transformations on trees (ASTs, etc), and graph construction and analysis (CFGs, etc)

we should make sure that the reference Oot implementation exposes this parallelizability, so that a self-hosting Oot compiled for a parallel target works in parallel (this is important not so much for performance reasons as that it will encourage us to make the language better for this sort of thing, which is a design goal)
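
to make the point concrete, here's a TypeScript sketch (illustrative only, not Oot) of why a context-free tree transformation parallelizes: each subtree is independent of its siblings, so the recursive calls could run concurrently on a parallel target.

    type AST = { node: string; children: AST[] };

    // Hypothetical leaf rewrite standing in for a real compiler pass.
    function rewrite(node: string): string {
      return node.toUpperCase();
    }

    // Each child is transformed independently; Promise.all expresses the
    // independence (actual parallelism would need worker threads).
    async function transform(t: AST): Promise<AST> {
      const children = await Promise.all(t.children.map(transform));
      return { node: rewrite(t.node), children };
    }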

---

why no comments in JSON:

" I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability. I know that the lack of comments makes some people sad, but it shouldn't.

Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser. "
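
the pipeline he describes, sketched in TypeScript (a toy stand-in for JSMin; it doesn't handle comment-like text inside strings):

    // Strip /* ... */ and // ... comments, then hand the result to JSON.parse.
    function stripComments(src: string): string {
      return src
        .replace(/\/\*[\s\S]*?\*\//g, "") // block comments
        .replace(/\/\/.*$/gm, "");        // line comments
    }

    const config = JSON.parse(stripComments(`{
      // who to greet
      "name": "world"
    }`));
    console.log(config.name); // "world"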

---

whorleater 4 days ago

parent flag favorite on: Why bad scientific code beats code following “best...

Disclosure: I'm a recent astronomy grad who specialized in computational astrophysics. Definitely biased.

The issue is that at least for many scientists and mathematicians, mathematical abstraction and code abstraction are topics that oftentimes run orthogonal to each other.

Mathematical abstractions (integration, mathematical vernacular, etc) are abstractions hundreds of years old, with an extremely precise, austere, and well defined domain, meant to manage complexity in a mathematical manner. Code abstractions are recent, flexible, and much more prone to wiggly definitions, meant to manage complexity in an architectural manner.

Scientists oftentimes have already solved a problem using mathematical abstractions, e.g. each step of the Runge-Kutta [1] method. The integrations and function values for each step are well defined, which results in scientists wanting to map these steps one-to-one with their code, oftentimes resulting in blobs of code with if/else statements strewn about. This is awful by software engineering standards, but in the view of the scientist, the code simply follows the abstraction laid out by the mathematics themselves. This is also why it's oftentimes correct to trust results derived from spaghetti code, since the methods the code implements are themselves oftentimes verified.

Software engineers see this complexity as something that's malleable, something that should be able to handle future changes. This is why code abstractions play bumper cars with mathematical abstractions, simply because mathematical abstractions are meant to be unchanging by default, which makes tools like inheritance, templates, and even naming standards poorly suited for scientific applications. It's extremely unlikely I'll ever rewrite a step of symplectic integrators [2], meaning that I won't need to worry about whether this code is future proof against architectural changes or not. Functions, by and large in mathematics, are meant to be immutable.

Tl; dr: Scientists want to play with Hot Wheels tracks while software engineers want to play with Lego blocks.

[1]: https://en.wikipedia.org/wiki/Runge–Kutta_methods

[2]: https://en.wikipedia.org/wiki/Symplectic_integrator
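
as an illustration of the one-to-one mapping he describes, here's the classical RK4 step transcribed straight from the textbook formulas (my TypeScript sketch, not from the comment):

    // One step of classical RK4 for dy/dt = f(t, y), step size h.
    function rk4Step(
      f: (t: number, y: number) => number,
      t: number,
      y: number,
      h: number
    ): number {
      const k1 = f(t, y);
      const k2 = f(t + h / 2, y + (h / 2) * k1);
      const k3 = f(t + h / 2, y + (h / 2) * k2);
      const k4 = f(t + h, y + h * k3);
      return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4);
    }

    // e.g. dy/dt = y with y(0) = 1: one step of size 0.1 approximates e^0.1
    console.log(rk4Step((_t, y) => y, 0, 1, 0.1)); // ~1.10517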

---

thinkpad20 199 days ago

parent [-]on: C++ is a hack

Isn't name mangling a standard thing done by just about every language except for C that compiles to object code? The lack of standardization between compilers sounds potentially problematic, but name mangling itself doesn't seem like a bad thing. The alternative is to force the programmer to do their own "name mangling" a la C.

JoeAltmaier 199 days ago [-]

Name mangling is a sad old way of using antiquated object formats while moving forward with a language. Better solutions are obvious - a new object format that stores constraints/annotations/scope info would be a good one.

---

an interesting perspective on C++ being (not) too big:

lfowles 199 days ago

parent favorite on: C++ is a hack

I appreciate it being multi-paradigm as well. There are 3-4 languages in there with _great_ interoperability. As well as the ability to pull in a C library to a project with little fuss.

---

on lambda calc and why it's interesting:

" LC is a language with three constructors

    var[x]         reference a name
    lam[x](...)    create a scope where x is *known*
    app(f, v)      when f is lam[x], give 
                   meaning to x within its body

Forget ideas of functions or anything else you know from CS. Just think about what these three constructors mean. To give them real meaning we have to define one more thing: the reduction step. It's what you think it is

    app(lam[x](body), value) ==> body{x -> value}

where the right side means that we replace all instances of "x" within "body" with "value".

So imagine that to be the definition of a programming language---the whole thing. It seems a bit silly. It doesn't obviously have any of the things you'd expect a language to have: numbers, booleans, data structures, control flow.

It doesn't even seem to have any way of talking about how it instructs a machine on what to do. It's truly austere.

So the magic, of course, is that it actually does have all of the above. Those 3 constructors and one rule are powerful enough already to generate all of that. "
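
the three constructors and the one rule, transcribed directly into TypeScript (my sketch; the substitution is naive and ignores variable capture, which a real implementation must handle):

    // var[x], lam[x](...), app(f, v) as a discriminated union.
    type Term =
      | { tag: "var"; name: string }
      | { tag: "lam"; param: string; body: Term }
      | { tag: "app"; fn: Term; arg: Term };

    // body{x -> value}: replace free occurrences of x in body with value.
    function subst(body: Term, x: string, value: Term): Term {
      switch (body.tag) {
        case "var":
          return body.name === x ? value : body;
        case "lam":
          // x is re-bound here, so don't substitute under this lam.
          if (body.param === x) return body;
          return { tag: "lam", param: body.param, body: subst(body.body, x, value) };
        case "app":
          return { tag: "app", fn: subst(body.fn, x, value), arg: subst(body.arg, x, value) };
      }
    }

    // The reduction step: app(lam[x](body), value) ==> body{x -> value}
    function step(t: Term): Term {
      if (t.tag === "app" && t.fn.tag === "lam") {
        return subst(t.fn.body, t.fn.param, t.arg);
      }
      return t;
    }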

---

"Generalized references":

kiiski 46 days ago

parent favorite on: Lisp: it's not about macros, it's about READ (2012...

The basic idea is to have a uniform way of setting values to variables or calling a setter method. That's very helpful for writing macros that need to both read and set a value.

For example, the `+=`-operator found in many languages is implemented as a macro in CL (INCF). It needs to read the value of a place, increment it and then set the new value to the same place.

Doing that with a variable is easy. Using non-lisp syntax

  x += 5 

expands to

  x = x + 5

Doing that with object slots is easy too, if you don't mind accessing the slot directly. However, if you want to have getter and setter methods (which may not even correspond directly to a slot in the object) the expansion would look different.

  o.setFoo(o.getFoo() + 5)

or if the getter and setter can have the same name

  o.foo(o.foo() + 5)

Since the expansion is different, the macro would have to figure out whether it's dealing with a variable or an accessor method and behave differently. With generalized references you can use the accessor method as if it was a variable

  o.foo()      // calls the getter
  o.foo() = 5  // calls the setter

Now you can easily expand

  o.foo() += 5

to

  o.foo() = o.foo() + 5

just the same as you would with a variable. Behind the scenes you still have separate getter and setter methods, but the macro doesn't need to worry about that.

tel 46 days ago

parent [-]on: Lisp: it's not about macros, it's about READ (2012...

I feel like functional lenses have vastly improved upon this idea now by (a) being first class, (b) being completely composable, and (c) being general enough to include notions of access into sum types and having multiple targets.
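
a minimal version of what tel means, in TypeScript (my sketch): the getter/setter pair becomes a first-class value, and composition is ordinary function plumbing:

    interface Lens<S, A> {
      get(s: S): A;
      set(s: S, a: A): S;
    }

    // Composing lenses gives uniform access to nested structure.
    function compose<S, A, B>(outer: Lens<S, A>, inner: Lens<A, B>): Lens<S, B> {
      return {
        get: (s) => inner.get(outer.get(s)),
        set: (s, b) => outer.set(s, inner.set(outer.get(s), b)),
      };
    }

    // The "o.foo() += 5" example, lens-style:
    const foo: Lens<{ foo: number }, number> = {
      get: (o) => o.foo,
      set: (o, v) => ({ ...o, foo: v }),
    };
    const o1 = { foo: 1 };
    const o2 = foo.set(o1, foo.get(o1) + 5); // { foo: 6 }, o1 untouched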

---