proj-oot-ootLibrariesNotes10

functions and libraries etc. should come with at least a few semantic labels, e.g. data-race freedom

http://urwid.org/

in Rust the std library is always available and doesn't need to be put in your Cargo.toml; however, it does still need to be imported with use
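A minimal sketch of that (the module and names chosen here are arbitrary): nothing is added to Cargo.toml for std, but the item still has to be brought into scope with use:

```rust
// std is linked automatically with no Cargo.toml entry, but HashMap
// lives outside the prelude, so it still needs this `use` line.
use std::collections::HashMap;

// Count occurrences of each word.
fn word_counts<'a>(words: &[&'a str]) -> HashMap<&'a str, usize> {
    let mut counts = HashMap::new();
    for w in words {
        *counts.entry(*w).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = word_counts(&["std", "core", "std"]);
    println!("{:?}", counts);
}
```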

---

https://github.com/weld-project/weld/tree/master/python/grizzly/grizzly https://www.weld.rs/grizzly https://pypi.python.org/pypi/pygrizzly/0.0.1 Grizzly is a subset of the Pandas data analytics library integrated with Weld

https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/groupbyweld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/numpy_weld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/dataframeweld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/seriesweld.py

https://www.weld.rs/weldnumpy WeldNumpy is a subset of the NumPy numerical computing framework integrated with Weld

https://medium.com/dunder-data/minimally-sufficient-pandas-a8e67f2a2428

---

Avi-D-coder 3 months ago [-]

Haskell without lens, text, vector, etc. is a bit like Rust with only core, not std.

The Haskell standard library is tiny. Libraries like lens are not optional. In practice you won't understand any open source Haskell without a rudimentary understanding of lens. I get why parser libraries were banned, but excluding lens, vector, and text?

I like Rust a lot, but Haskell minus its more advanced type system is just Rust plus GC. Let's not pretend this is a fair comparison of languages when it's primarily a comparison of standard libraries.

 orbifold 3 months ago [-]

That's a total stretch; lens is not used in GHC, for example, nor in lots of other smaller compilers written in Haskell. It is used in Ermine, but that has been stuck in a semi-complete state for a while now and Ekmett has moved on.

runeks 3 months ago [-]

I second this.

I’ve written tens of thousands of lines of Haskell, and I’ve never used lens. Also, putting it in the same category as text and vector doesn’t make sense — these are indeed unavoidable, and practically all my projects use them.

smichael 3 months ago [-]

Thirded. No lens in pandoc (50k lines of haskell), darcs (40k), most hledger packages (15k).

sbergot 3 months ago [-]

This is why I gave up on Haskell. Lens works as advertised, but is a pain to learn and to use in practice: the abstraction is tough to grasp and it is hard to form an intuition about it. The compilation errors are laughably esoteric. The number of ad-hoc squiggly operators is ridiculous. You also need to understand a lot of language extensions to get how the type checking works.

To me it looks like an impressive proof of concept for a future programming language based around it. ...

foldr 3 months ago [-]

Lenses don't really give you anything that you can't get from (a) a sensible syntax for record updates and (b) intrusive pointers. Lenses only exist because of Haskell 98's uniquely bad support for record types. Record access and update in most other languages is just simpler.

fmap 3 months ago [-]

Lenses are more than reified record labels, though. There is a hierarchy of concepts that can be freely composed based on which features your data structure actually supports. In particular, lenses can be composed with traversals ("lenses" pointing at multiple fields), yielding "LINQ"-like features without introducing new syntax or concepts.

The main problem with lenses is that common lens libraries look extremely complicated at first glance and seem to be solving a very simple problem. That rightfully puts most people off of learning what all the fuss is about.

0xab 3 months ago [-]

If you use lens as just a way to access records like you do in other languages, then there is absolutely nothing hard about it. Literally all you need to know is:

Name your records like "data Prefix = Prefix { prefixFieldName :: ... }", call "makeFields ''Prefix" once at the bottom of your file, and use "obj ^. fieldName" to access and "obj & fieldName .~ value" to set.

That's it. You now have 100% of the capabilities of record update in any other language. This doesn't get any simpler in any other language. It even pretty much looks like what you would do in other languages.

I'll grant you, Haskell and lens do a terrible job of explaining subsets of functionality that are simple and let you get the job done before jumping in the deep end.

foldr 3 months ago [-]

Yeah, so it's a less good way of accessing record fields than the one present in 99% of other programming languages. Your own description makes this plain. Let's compare to Javascript:

What bugs me is when Haskell advocates try to use all the additional esoteric features of the lens library as an excuse for this fundamental baseline crappiness.

Haskell really just needs proper support for record types. Then people could use lenses when they actually need lenses (never?). At the moment, they're using lenses because they want something that looks almost like a sane syntax for record updates.

tathougies 3 months ago [-]

Record types are not a solution to the problem lens solves. Lens is a good library and a good concept. If we spent some time on it in programming class, most people would get it. When moving to non-Haskell languages, the lack of proper lenses is something I notice almost immediately.

foldr 3 months ago [-]

I know what the lens library does - I write Haskell for my day job.

In practice, the main reason people use it is to work around the deficiencies of Haskell's built-in record system:

>I never built fclabels because I wanted people to use my software (maybe just a bit), but I wanted a nice solution for Haskell’s non-composable record labels.(http://fvisser.nl/post/2013/okt/11/why-i-dont-like-the-lens-...)

The other features of lenses don't strike me as particularly useful. YMMV. I'd also question the quality of the library. It's full of junk like e.g. http://hackage.haskell.org/package/lens-4.17.1/docs/src/Cont..., which is just an invitation to write unreadable code.

tathougies 3 months ago [-]

My biggest use case for lenses that I miss in other languages is the ability to interact with all elements of a collection, or elements in deeply nested collections.

For example, if I had a list of records with a field named 'categories' holding a list of objects with a field named 'tags', and I wanted to get all of these tags in one list without nested loops, lens makes it easy: record ^.. categories . each . tags . each. Or I could update them all, etc. It's just so easy to do this kind of data munging with lens that writing fors, whiles, etc. in other languages is painful.
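Not lens, but the same traversal can be sketched with Rust iterators, where flat_map plays the role of ". each ." (the struct and field names below come from the comment's hypothetical example):

```rust
// Hypothetical records mirroring the comment: each record has
// `categories`, each category has `tags`.
struct Category { tags: Vec<String> }
struct Record { categories: Vec<Category> }

// Iterator analogue of `record ^.. categories . each . tags . each`:
// flatten both levels of nesting and collect every tag in one pass.
fn all_tags(records: &[Record]) -> Vec<&str> {
    records
        .iter()
        .flat_map(|r| &r.categories)
        .flat_map(|c| &c.tags)
        .map(String::as_str)
        .collect()
}

fn main() {
    let records = vec![Record {
        categories: vec![
            Category { tags: vec!["a".into(), "b".into()] },
            Category { tags: vec!["c".into()] },
        ],
    }];
    println!("{:?}", all_tags(&records));
}
```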

 jpittis 3 months ago [-]

> I think the smaller differences are also large enough to rule out extraordinary claims, like the ones I’ve read that say writing a compiler in Haskell takes less than half the code of C++ by virtue of the language

Specifically the "by virtue of the language" part:

Seems to me like it's unreasonable to claim the languages are on equal footing because fancy parser libraries aren't allowed to be used for the project. The fancy parser libraries exist for certain languages specifically because the languages enable them to be written. (For example in Haskell: monadic libraries, libraries that take advantage of GADTs, etc.)

trishume 3 months ago [-]

I don't think monadic parser libraries have a real claim to be that difference. All the languages listed have excellent parsing libraries that make things similarly easy, if not by language power then by grammar DSL with embeddable code snippets.

I think if any library could make a real difference for Haskell it's most likely to be http://hackage.haskell.org/package/lens, which a Haskeller friend of mine claims could likely make a lot of the AST traversal and rewriting much terser.

pwm 3 months ago [-]

While I found your article informative and interesting, I think it only works in the very specific context of this assignment. Disallowing powerful language features/libraries means it's not a level playing field and thus not a fair comparison. Some languages' standard libraries are tiny, some are huge. Some languages have lots of advanced features; e.g. GP mentioned GADTs, with which one can write type-safe, correct-by-construction ASTs. In other words, programs passing specific tests in a specific context does not imply they are comparable in terms of general correctness/robustness/maintainability (as you noted re caught edge cases).

howenterprisey 3 months ago [-]

Hoopl (data flow analysis) would also make a difference. I did a very similar project at my university in Haskell and Hoopl definitely saved us from writing quite a bit of code. We also used parser combinators in the frontend, which I think saved us time too.

anaphor 3 months ago [-]

I've found PEGs (Parsing Expression Grammars) to make things extremely easy and terse. E.g. OMeta, Parsley, etc.

My experience with using both PEGs and parser combinators is that there isn't a huge difference in the total number of lines of code. On the other hand though, the syntax of PEGs would be easier to understand for someone who is familiar with BNF style notation.

pyrale 3 months ago [-]

Recoding a viable subset of lens would have taken 50 LOC in Haskell. Likewise, rewriting parser combinators would not have taken long for experienced devs. The problem here is that requiring people to recode the libs on top of the compiler is disingenuous. And if you ban idiomatic libs, you also ban most online help, tutorials, etc.

loup-vaillant 3 months ago [-]

(A suitable subset of) Parsec is about 100 lines of OCaml. Implementing a PEG syntax on top of it is about 150 lines of Haskell (or less, I'm a Haskell noob).

Building up the knowledge to get to this point however… nope, those students were better off going with hand-written recursive descent (or Lex/Yacc, since an equivalent was allowed).

https://github.com/LoupVaillant/Monokex/blob/master/src/pars...

http://loup-vaillant.fr/projects/metacompilers/nometa-haskel...

steveklabnik 3 months ago [-]

My understanding is that in production compilers, hand rolled parsers are the norm. Parsing libraries are cool, but just aren’t used for big projects.

sanxiyn 3 months ago [-]

Both OCaml and GHC use parser generators. It is incorrect to suggest production compilers hand roll parsers.

steveklabnik 3 months ago [-]

Two counterexamples does not disprove “a norm”. There are always exceptions!

tomasato 3 months ago [-]

Excluding the lens library (as per the article) is unusual; it provides natural getter/setter and row-polymorphism-style functionality.

More anecdotally, I'd argue parsing libraries are common; just look at the prevalence of attoparsec and others. But most parsing libraries in the ecosystem are parser combinator libraries, which don't support the performance and nice error messages that compilers need.

garmaine 3 months ago [-]

That was where I stopped reading. If a library like lens—used by nearly every haskeller in every project—was disallowed, I don’t know what the purpose of this exercise was.

---

https://internals.rust-lang.org/t/calculating-which-3rd-party-crates-are-good-candidates-for-std-inclusion-via-left-pad-index/11129

Patrick Walton (pcwalton): Idea: the "left-pad index", a score for Rust crates that combines small size with popularity. bascule: I went ahead and crunched the numbers for crates.io, surveying the top 500 crates by recent downloads, dividing that number by the crate size, and coming up with the following results:

Per Patrick, here are some good candidates for potential inclusion in std. Patrick Walton (pcwalton) @bascule: Neat! As I suspected, matches, atty, cfg-if, lazy_static, memoffset, scopeguard, nodrop could all easily be in the stdlib.

Ixrec 27d

I agree that any metric which includes transitive clients is going to have its usefulness quickly demolished by the "oh, crossbeam used it" problem. ...

Let's try reverse_deps / crate_size ^ 2:

    matches
    lazy_static
    failure_derive
    atty
    phf_codegen
    hex
    phf
    strum
    cfg-if
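The two scores discussed in this thread can be sketched as follows (the struct and field names are hypothetical; the thread only defines the formulas):

```rust
// Hypothetical per-crate statistics for scoring std-inclusion candidates.
struct CrateStats {
    recent_downloads: f64,
    reverse_deps: f64,
    size_bytes: f64,
}

// pcwalton's "left-pad index": popularity divided by crate size,
// so small-but-popular crates score highest.
fn left_pad_index(c: &CrateStats) -> f64 {
    c.recent_downloads / c.size_bytes
}

// Ixrec's variant: reverse dependencies over size squared, which
// penalizes large crates much harder.
fn reverse_dep_index(c: &CrateStats) -> f64 {
    c.reverse_deps / (c.size_bytes * c.size_bytes)
}

fn main() {
    let tiny = CrateStats { recent_downloads: 1e6, reverse_deps: 500.0, size_bytes: 1e3 };
    println!("{} {}", left_pad_index(&tiny), reverse_dep_index(&tiny));
}
```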

RazrFalcon 27d

It doesn't seem like a good metric. Of the crates on the screenshot, only matches and cfg-if are std-worthy, imho. And both of them should be implemented as a language feature anyway.

Personally, log is my main request. Also arrayvec/smallvec. byteorder is also very popular.

UPD: I almost forgot about language-level bitflags. The current implementation isn't user-friendly (IDEs cannot expand macros yet, so it breaks autocompletion).

scottmcm 27d

    My personal favourite microcrate is matches .

Agreed, to the point that we keep having conversations about it being a feature with real syntax (one proposal was x is Some(_), for example).

Would still be worth contemplating putting assert_matches! in std, though, the same way we have assert_eq! even with == syntax...
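The matches! macro did eventually land in the standard library (stable since Rust 1.42), so the assert!-plus-matches! combination works today; a small sketch:

```rust
// `matches!` is in std's prelude since Rust 1.42; combined with
// `assert!` it approximates the `assert_matches!` discussed above.
fn describe(x: Option<i32>) -> &'static str {
    // Pattern plus guard, as `matches!` allows.
    if matches!(x, Some(n) if n > 0) { "positive" } else { "other" }
}

fn main() {
    let x = Some(3);
    assert!(matches!(x, Some(_))); // assert_matches!-style check
    println!("{}", describe(x));
}
```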

        try_from

Good to see we've at least made progress on some of them :slight_smile:

synek317 27d

Having some experience with multiple 'mainstream' languages, I was pretty surprised that the following crates are not in the std:

    log
    rand
    lazy_static or something similar

Other good candidates:

    syn, quote, proc_macro2 - must haves to create any proc macro,
    regex,
    itertools - it provides a lot of goodies but sometimes I decide to write code in more ugly way just because I'm too lazy to add the dependency or I want to reduce build time,
    derive more

Also, I don't understand why there is std::time and crate time. And chrono, or at least parts of chrono, seems to be a good candidate.

bascule 27d

    For example, the table above contains lazy_static. It has a lot of dependencies, because it solves a specific and quite common problem. However, I find once-cell more elegant and ergonomic. It has fewer reverse dependencies, mostly because it is younger (I think).

I would agree just copying and pasting lazy_static into std as-is would be a bad idea. That said, you're missing the forest for the trees: it isn't so much that we should just outright copy and paste these crates into std, but rather these crates provide features which might be good candidates for first-class std features.

In the case of lazy_static, improving Rust's core support for static data (e.g. static initializers, first-class heap-allocated statics, and associated statics) is a common topic on these forums. The problem of static data, in particular, is a problem that can be solved much more elegantly and powerfully at the language level rather than at the library level, and could potentially interact with things like the trait system or program startup.

Centril 27d

    syn, quote, proc_macro2 - must haves to create any proc macro,

This would amount to freezing the syntax of the language itself.

    Also arrayvec / smallvec .

I'm personally in favor of ArrayVec<...> using const generics on nightly because they are sort of a vocabulary type, at least for a language like Rust.

    Would still be worth contemplating putting assert_matches! in std , though, the same way we have assert_eq! even with == syntax...

Imo assert!(let A = expr && ...); seems strictly more flexible.

    itertools - it provides a lot of goodies but sometimes I decide to write code in more ugly way just because I'm too lazy to add the dependency or I want to reduce build time,

We have imported some stuff from itertools over time. Would be worth going over again to see if there are some more things we could add.

    derive more

Could definitely see this; adding more built-in derives for standard library traits seems sensible if obvious structural implementations can be given.

    In the case of lazy_static, improving Rust's core support for static data, e.g. static initializers, first-class heap-allocated statics, and associated statics, is a common topic on these forums. The problem of static data, in particular, is a problem that can be solved much more elegantly and powerfully at the language level rather than at the library level, and could potentially interact with things like the trait system or program startup.

I think it would take some convincing for me to be comfortable with baking in support for (and thereby encouraging) what are essentially global singletons into the language itself since that is often a code-smell and hacks around better architectures. Most of the times I've used lazy_static! I've come to regret it later.

    first-class heap-allocated statics

There are plans :slight_smile: (Me and Oliver should probably write an RFC at some point...) RustConf 2019 - Taking Constant Evaluation to the Limit by Oliver Schneider

    and associated statics

...are generally wanted, but these would allow generic statics, and those generally do not mix with dylibs (which many want to ditch eventually...).

CAD97 Regular 27d

...


        Would still be worth contemplating putting assert_matches! in std , though, the same way we have assert_eq! even with == syntax...
    Imo assert!(let A = expr && ...); seems strictly more flexible.

assert!(expr == expected) is strictly more flexible than assert_eq!(expr, expected) as well, yet we still have assert_eq! because it can give more useful errors than assert!(==). I think the same applies to assert_matches! and assert!(let =).

bascule 27d

    There are plans :slight_smile: (Me and Oliver should probably write an RFC at some point...)

Neat! Looking forward to seeing it.

pcwalton 27d

    Quickly about log: I don't believe that the style of logging we see now, as exemplified by log, will be the dominant form of how we emit instrumentation from applications and libraries in the decently-near future. I think it'll probably be far closer to OpenTelemetry or tracing.

For servers, maybe. But for those of us who are, say, writing low-level graphics code, those crates are total overkill. I don't want to learn a fancy logging system which has no benefit to the code that I'm actually writing right now; I just want to use log so that I can get printfs that actually work on Android. If log were to go away, then I'd probably migrate to just open-coding calls to __android_log_print instead of using a heavyweight logging framework, which would obviously be of no benefit.

kornel 26d

I think another aspect that's missing is whether these crates are the dominant and polished solutions to their respective problems, so that when std picks one "winner", most users will be happy with it.

For example, cfg-if, atty, and num_cpus seem to have one uncontroversial solution, and if they were in std, I don't think anyone would mind.

But OTOH logging is a big enough problem that it can be solved in multiple ways, and there are multiple crates competing. log works for Patrick, but I write servers, and it causes headaches for me, so that crate is not one-size-fits-all.

---

https://docs.rs/crossbeam/0.7.3/crossbeam/ Tools for concurrent programming.

---

https://taylor.fausak.me/2019/11/16/haskell-survey-results/#s2q5

" Which language extensions would you like to be enabled by default?

Multiple select. 48% 578 OverloadedStrings? 40% 489 LambdaCase? 34% 414 DeriveGeneric? 29% 356 DeriveFunctor? 29% 355 GADTs 28% 338 BangPatterns? 26% 312 FlexibleInstances? 25% 302 FlexibleContexts? 25% 302 ScopedTypeVariables? 25% 298 RankNTypes? 24% 293 DeriveFoldable? 23% 273 GeneralizedNewtypeDeriving? 22% 269 TypeApplications? 21% 251 TypeFamilies? 21% 250 DeriveTraversable? 20% 245 DataKinds? 19% 235 TupleSections? 19% 227 MultiParamTypeClasses? 18% 213 DerivingVia? 18% 212 TypeOperators? 17% 207 KindSignatures? 15% 178 DerivingStrategies? 15% 176 DeriveDataTypeable? 14% 172 MultiWayIf? 14% 168 ViewPatterns? 14% 164 StandaloneDeriving? 13% 163 ConstraintKinds? 13% 161 DeriveAnyClass? 13% 160 RecordWildCards? 13% 157 EmptyCase? 12% 144 ApplicativeDo? 12% 141 FunctionalDependencies? 11% 139 ExplicitForAll? 11% 135 InstanceSigs? 11% 128 GADTSyntax 10% 125 PatternSynonyms? 10% 122 NamedFieldPuns? 10% 120 NumericUnderscores? 212% 2563 Other

This is everyone’s favorite question. As you can see, there’s a long tail of extensions that people would like to be enabled by default. I only included things above that got at least 10% of the vote. OverloadedStrings is always the most popular, and it actually got more popular this year. LambdaCase continues to hang out in second place, followed by some extensions related to deriving and GADTs. My read of this is that people want some quality-of-life improvements without having to manually enable them, either in their Cabal file or in every source file. "

---

according to

https://2019.stateofjs.com/testing/

jest is a beloved js testing framework:

https://jestjs.io/

---

according to

https://2019.stateofjs.com/other-tools/

common js libraries (about 25% usage) are:

vscode is the most common text editor for js programming

webpack is the most common build tool, followed by gulp

python is the most common other language used by js devs (25%), followed by C# (15%)

---

" C++ has a monolithic standard library with an amazing set of cool stuff in it (because, as I noted in the last section, of when it was written). However, the library embeds some important assumptions. In particular, it is written for a "normal" C++ execution environment, which for our purposes means two things:

    There is a heap, and it's okay to allocate/free whenever.
    Exceptions are turned on.

In most high-reliability, hard-real-time embedded environments, neither of these statements is true. We eschew heaps because of the potential for exhaustion and fragmentation; we eschew exceptions because the performance of unwinding code is unpredictable and vendor-dependent[2].

[2] There are also C++ programmers who avoid exceptions for religious reasons. I'm not among them; I have no objections to their existence, but I wish unwinding happened in predictable time.

Now, there are parts of the C++ standard library that you can use safely in a no-heap, no-exceptions environment. Header-only libraries like type_traits are probably fine. Simple primitive types like atomic are ... probably fine?

I keep saying "probably" because the no-heap, no-exception subset of the C++ standard is not clearly defined. (The C++ standards folk have, in fact, resisted doing this, arguing that it would fragment the language; this ship has most definitely sailed.) As a result, it's really easy to accidentally introduce a heap dependency, or to accidentally use an API that can't indicate failure when exceptions are disabled (like std::vector::push_back).

The Rust standard library has a critical difference: it's divided into two parts, std and core. std is like the C++ equivalent. core, on the other hand, is how std itself is implemented, and doesn't assume the existence of things like "the heap," threads, and the like. While code depends on std by default, you can set an attribute, no_std, to request only core.

This is a tiny design decision with huge implications:

    By setting the #[no_std] attribute on a crate, you're opting out of the default dependency on std. Any attempt to use a feature from std is now a compile-time error[3] — but you can still use core.
    You can trust other crates to do the same, so you can use third-party libraries safely if they, too, are no_std. Many crates are either no_std by default, or can have it enabled at build time.
    core is small enough that porting it to a new platform is easy -- significantly easier, in fact, than porting newlib, the standard-bearer for portable embedded C libraries."
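A small illustration of the std/core split described above (this runs as a normal std program; the comments mark what would still compile under #[no_std]):

```rust
// Everything under `core::` is available even without std.
use core::cmp::min;          // would survive under #[no_std]
use core::num::NonZeroU32;   // would survive under #[no_std]
use std::vec::Vec;           // not in core: Vec needs a heap allocator
                             // (it lives in alloc/std)

// A heap-free helper using only core items.
fn smallest(values: &[u32]) -> u32 {
    values.iter().fold(u32::MAX, |acc, &v| min(acc, v))
}

fn main() {
    let v: Vec<u32> = vec![3, 1, 2]; // std-only: allocates
    println!("{} {:?}", smallest(&v), NonZeroU32::new(0));
}
```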

---

https://doc.rust-lang.org/core/index.html

---

https://github.com/real-logic/agrona

---

https://github.com/AndreaOrru/zen/blob/master/kernel/syscall.zig

---

" In general, unfair locking can get so bad latency-wise that it ends up being entirely unacceptable for larger systems. But for smaller systems the unfairness might not be as noticeable, but the performance advantage is noticeable, so then the system vendor will pick that unfair but faster lock queueing algorithm.

(Pretty much every time we picked an unfair - but fast - locking model in the kernel, we ended up regretting it eventually, and had to add fairness). "


" Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).

But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).

...

But at least that was a conceptually very simple model for doing locking: you create what is basically a counting semaphore by initializing a pipe with N characters (where N is your parallelism for the semaphore), and then anybody who wants to get the lock does a one-byte read() call and anybody who wants to release the lock writes a single byte back to the pipe. "
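A sketch of that pipe-as-semaphore trick, with a Rust channel standing in for the pipe (the channel and token type are my substitution; the original uses an OS pipe and single bytes):

```rust
use std::sync::mpsc;

// Counting semaphore in the style of the pipe trick quoted above:
// the channel is pre-filled with N tokens; acquiring is a blocking
// recv (the one-byte read()), releasing is a send (the write()).
struct PipeSemaphore {
    tx: mpsc::Sender<u8>,
    rx: mpsc::Receiver<u8>,
}

impl PipeSemaphore {
    fn new(n: usize) -> Self {
        let (tx, rx) = mpsc::channel();
        for _ in 0..n {
            tx.send(1u8).unwrap(); // pre-fill with N "bytes"
        }
        PipeSemaphore { tx, rx }
    }
    fn acquire(&self) {
        self.rx.recv().unwrap(); // like a one-byte read()
    }
    fn release(&self) {
        self.tx.send(1u8).unwrap(); // like a one-byte write()
    }
}

fn main() {
    let sem = PipeSemaphore::new(2);
    sem.acquire();
    sem.acquire(); // both slots taken; a third acquire would block
    sem.release();
    sem.acquire(); // succeeds again
}
```

A real multi-threaded version would need the receiver behind a Mutex, since mpsc::Receiver cannot be shared across threads directly; the OS pipe gets that sharing for free.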

---

strenholme 1 day ago [-]

I’m already seeing a lot of discussion both here and over at LWN about which hash algorithm to use.

The Git team made the right choice: SHA2-256 is the best choice here; it has been around for 19 years and is still secure, in the sense that there are no known attacks against it.

Both BLAKE[2/3] and SHA-3 (Keccak) have been around for 12 years and are both secure; just as BLAKE2 and BLAKE3 are faster reduced-round variants of BLAKE, Keccak/SHA-3 has the official faster reduced-round KangarooTwelve and MarsupilamiFourteen variants.

BLAKE is faster when using software to perform the hash; Keccak is faster when using hardware to perform the hash. I prefer the Keccak approach because it gives us more room for improved performance once CPU makers create specialized instructions to run it, while being fast enough in software. And, yes, SHA-3 has the advantage of being the official successor to SHA-2.

reply

---

https://awesomekling.github.io/pledge-and-unveil-in-SerenityOS/

---

hyc_symas 1 day ago [-]

The standard string library is still pretty bad. This would have been a much better addition for safe strcpy.

Safe strcpy

    #include <stdio.h>

    /* Copy s into d, never writing at or past e (one past the end of
     * the buffer); returns the position after the last byte written,
     * so calls chain without recomputing the remaining length. */
    char *stecpy(char *d, const char *s, const char *e)
    {
     while (d < e && *s)
      *d++ = *s++;
     if (d < e)
      *d = '\0';
     return d;
    }

    int main(void) {
      char buf[64];
      char *ptr, *end = buf + sizeof(buf);
      ptr = stecpy(buf, "hello", end);
      ptr = stecpy(ptr, " world", end);
      puts(buf);
      return 0;
    }

Existing solutions are still error-prone, requiring continual recalculation of the remaining buffer length after each use in a long sequence, when the only thing that matters is where the buffer ends, which is effectively a constant across multiple calls.

What are the chances of getting something like this added to the standard library?

reply

pascal_cuoq 1 day ago [-]

For what it's worth, I personally like this approach, because there are some cases in which it requires less arithmetic in order to be used correctly. And it lends itself better to some forms of static analysis, for similar reasons, in the following sense:

There is the problem of detecting that the function overflows despite being a “safe” function. And there is the problem of precisely predicting what happens after the call, because there might be an undefined behavior in that part of the execution. When writing to, say, a member of a struct, you pass the address of the next member and the analyzer can safely assume that that member and the following ones are not modified. With a function that receives a length, the analyzer has to detect that if the pointer passed points 5 bytes before the end of the destination, the accompanying size is 5; if the pointer points 4 bytes before the end, the accompanying size is 4; etc.

This is a much more difficult problem, and as soon as the analyzer fails to capture this information, it appears that the safe function a) might not be called safely and b) might overwrite the following members of the struct.

a) is a false positive, and b) generally implies tons of false positives in the remainder of the analysis.

(In this discussion I assume that you want to allow a call to a memory function to access several members of a struct. You can also choose to forbid this, but then you run into a different problem, which is that C programs do this on purpose more often than you'd think.)

reply

msebor 1 day ago [-]

There are many improved versions of string APIs out there, too many in fact to choose from, and most suffer from one flaw or another, depending on one's point of view. Most of my recent proposals to incorporate some that do solve some of the most glaring problems, that have been widely available for a decade or more, and that are even part of other standards (POSIX), have been rejected by the committee. I think only memccpy, strdup, and strndup were added for C2X. (See http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2349.htm for an overview.)

reply

AceJohnny2 1 day ago [-]

> Most of my recent proposals [...] have been rejected by the committee.

Does anyone have insight on why?

reply

saagarjha 1 day ago [-]

memccpy is a very welcome addition on the front of copying strings; what else were you thinking of proposing?

reply

saagarjha 1 day ago [-]

I recently looked at a number of string copying functions, as well as came up with an API a bit similar to yours: https://saagarjha.com/blog/2020/04/12/designing-a-better-str... (mine indicates overflow more clearly). memccpy, which is coming in C2X, makes designing these kinds of things finally possible.

reply

---

https://medium.com/@peternorvig/ive-consed-every-pair-54ef5d9d93b6

I’ve used deftype, ftype, machine-type, define lisp-type, typecase, check-type, read-line class-name, class-of, defclass, load-time base-string, write-string, string-trim, run-time make-list, make-hash, make-node, string-make make-array, display, two-way, for Pete’s sake

I’ve used arrayp, boundp, minusp, iterator constantp, equalp, typep, numerator bit-nor, vector, ffloor, butlast, special or, xor, err-or, broadcast, truncate, conjugate, concatenate, package-error, allocate, update, random-state, what a terror.

---

nicoburns 5 days ago [–]

Qt is excellent, but C++ is quite a tough pill to swallow for many, especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there's a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).

reply

_630w 5 days ago [–]

There are a few options.

https://github.com/revery-ui/revery

https://github.com/briskml/brisk

reply

soraminazuki 5 days ago [–]

Why not use Qt bindings for $YOUR_LANGUAGE_OF_CHOICE? https://wiki.qt.io/Language_Bindings

reply

mjevans 4 days ago [–]

It's rather clunky and often requires writing like C++ in whatever language of choice you're using, the worst of both worlds.

I wonder what an API with only C (or some other low-level) bindings, designed to be easy to use externally, might look like.

reply

kdot 3 days ago [–]

Flutter for desktop solves this.

reply

catblast 4 days ago [–]

If you're using KDE, Qt is "native native".

You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms in Windows or the NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.

Qt pretty much draws using low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance-sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead) - people rag on vtable dispatch speed but jeez it's still orders of magnitude faster than something like ObjC, which served Apple quite well for years.

The performance of a Qt app is more likely a function of the app itself and how the app developers wrote it.

But no, you're not noticing any microseconds of difference in C++ overhead for Qt over "native native" - and you're basically comparing the GUI code of the platform, since Qt does its own rendering. Win32 is mostly pretty good, NS is a mixed bag, and Gtk+ is basically a slug. In all cases there is some kind of dynamic dispatch going on, because that is a fundamental pattern of most GUI libraries. But dynamic dispatch is almost never a factor in GUI render performance. Things like recalculating sizes for 1 million items in a table on every repaint are the things that get people into trouble, and that is regardless of GUI library.

reply

jcelerier 5 days ago [–]

a few reasons :

Of course you can make Qt look deliberately non-native if you want, but by default it tries its best - see https://code.woboq.org/qt5/qtbase/src/plugins/platforms/coco... and code such as https://code.woboq.org/qt5/qtbase/src/plugins/platforms/wind...

reply

irishcoffee 5 days ago [–]

Knowing what I know about Qt and what I've done with it in my day job, it's basically the best kept secret on HN. What they're doing with Qt 6+ licensing... I'm not sure how I feel, but as a pure multi-platform framework it really is the bees knees.

I've taken C++ Qt desktop apps that never had any intention of running on a phone, built them, ran them, and everything "just worked". I was impressed.

reply

---

https://cloudabi.org/

---

printf is way too powerful/complicated:

https://github.com/carlini/printf-tac-toe

---

https://gcanti.github.io/fp-ts/

---

" 1. Learn Key Syscalls

You should know what the following 12 key system calls do, which you'll see regularly in strace output: read, write, open, close, fork, exec, connect, accept, stat, ioctl, mmap, brk.

Each syscall has a man page, so if you are at the command line, it should only take a few seconds to jog your memory.

There are variants of these syscalls, so you may see "execve" for "exec", and "pread" as well as "read". There should be man pages for these, too. "

---

we could look at the oldest syscalls in Linux, but there are quite a lot of them. This is from Linux 0.01 from 1991:

" fn_ptr sys_call_table[] = { sys_setup, sys_exit, sys_fork, sys_read, sys_write, sys_open, sys_close, sys_waitpid, sys_creat, sys_link, sys_unlink, sys_execve, sys_chdir, sys_time, sys_mknod, sys_chmod, sys_chown, sys_break, sys_stat, sys_lseek, sys_getpid, sys_mount, sys_umount, sys_setuid, sys_getuid, sys_stime, sys_ptrace, sys_alarm, sys_fstat, sys_pause, sys_utime, sys_stty, sys_gtty, sys_access, sys_nice, sys_ftime, sys_sync, sys_kill, sys_rename, sys_mkdir, sys_rmdir, sys_dup, sys_pipe, sys_times, sys_prof, sys_brk, sys_setgid, sys_getgid, sys_signal, sys_geteuid, sys_getegid, sys_acct, sys_phys, sys_lock, sys_ioctl, sys_fcntl, sys_mpx, sys_setpgid, sys_ulimit, sys_uname, sys_umask, sys_chroot, sys_ustat, sys_dup2, sys_getppid, sys_getpgrp,sys_setsid}; " -- https://github.com/zavg/linux-0.01/blob/5839d67d5825265fc665c9dc0ec2e767ff47a6dd/include/linux/sys.h or from http://www.oldlinux.org/Linux.old/kernel/0.00/linux-0.01/include/linux/

you can see which syscalls were in the (later) version 1.0 of Linux in the syscalls 2 man page: https://man7.org/linux/man-pages/man2/syscalls.2.html

this intro to syscalls lists some: "

     GENERAL CLASS              SPECIFIC CLASS                 SYSTEM CALL
     ---------------------------------------------------------------------
     File Structure             Creating a Channel             creat()
     Related Calls                                             open()
                                                               close()
                                Input/Output                   read()
                                                               write()
                                Random Access                  lseek()
                                Channel Duplication            dup()
                                Aliasing and Removing          link()
                                Files                          unlink()
                                File Status                    stat()
                                                               fstat()
                                Access Control                 access()
                                                               chmod()
                                                               chown()
                                                               umask()
                                Device Control                 ioctl()
     ---------------------------------------------------------------------
     Process Related            Process Creation and           exec()
     Calls                      Termination                    fork()
                                                               wait()
                                                               exit()
                                Process Owner and Group        getuid()
                                                               geteuid()
                                                               getgid()
                                                               getegid()
                                Process Identity               getpid()
                                                               getppid()
                                Process Control                signal()
                                                               kill()
                                                               alarm()
                                Change Working Directory       chdir()
     ----------------------------------------------------------------------
     Interprocess               Pipelines                      pipe()
     Communication              Messages                       msgget()
                                                               msgsnd()
                                                               msgrcv()
                                                               msgctl()
                                Semaphores                     semget()
                                                               semop()
                                Shared Memory                  shmget()
                                                               shmat()
                                                               shmdt()
     ----------------------------------------------------------------------
     [NOTE:  The system call interface is that aspect of UNIX that has
     changed the most since the inception of the UNIX system.  Therefore,
     when you write a software tool, you should protect that tool by putting
     system calls in other subroutines within your program and then calling
     only those subroutines.  Should the next version of the UNIX system
     change the syntax and semantics of the system calls you've used, you
     need only change your interface routines.]" -- http://www.di.uevora.pt/~lmr/syscalls.html

here's an OS course presentation on syscalls that mentions some:

" Some POSIX System Calls

pid = fork()                            Create child process
pid = waitpid(pid, &statloc, options)   Wait for child to terminate
s = execve(name, argv, environp)        Replace process's image
exit(status)                            Terminate process
fd = open(file, how, ...)               Open file for read/write
s = close(fd)                           Close open file
n = read(fd, buffer, nbytes)            Read data from file into buffer
n = write(fd, buffer, nbytes)           Write data from buffer to file
pos = lseek(fd, offset, whence)         Move file pointer
s = stat(name, &buf)                    Get file's status information
s = mkdir(name, mode)                   Create new directory
s = rmdir(name)                         Remove empty directory
s = link(name1, name2)                  Create link to file
s = unlink(name)                        Remove directory entry
s = mount(special, name, flag)          Mount file system
s = umount(special)                     Unmount file system
s = chdir(dirname)                      Change working directory
s = chmod(name, mode)                   Change file's protection bits
s = kill(pid, signal)                   Send signal to a process
secs = time(&seconds)                   Get elapsed time since 1/1/70
...

In the next assignment, you must implement: open, read, write, lseek, close, dup2, fork, _exit, chdir, getcwd, getpid, execv, waitpid " -- https://www.cs.hmc.edu/~geoff/classes/hmc.cs134.201209/slides/class12_syscalls_beamer.pdf

---

we could look at the core functionalities of GNU Hurd but it's quite a lot:

https://en.wikipedia.org/wiki/GNU_Hurd#Architecture_of_the_servers

Viengoos, an unfinished rewrite of Hurd, is slightly simpler but still complex: https://www.gnu.org/software/hurd/microkernel/viengoos/documentation/reference-guide.pdf

i think Plan 9's core might be simpler?

---

plan 9's syscalls:

Plan 9 System Calls

" All of 'em (excluding obsolete ones).

Files
    open        open an existing file
    create      create a new file or open an existing file for writing
    pread       read from an open file
    pwrite      write to an open file
    chdir       change current directory
    seek        change the current position in an open file
    close       close a file descriptor
    dup         duplicate a file descriptor
    fd2path     retrieve file name
    stat        read file metadata
    fstat       read open file metadata
    wstat       write file metadata
    fwstat      write open file metadata
    remove      delete a file

Process management
    rfork       change process attributes or create a new process
    exec        replace the current process
    exits       terminate the current process
    errstr      exchange error string
    sleep       sleep a given amount of time

Synchronization
    await       wait for a child process to terminate
    pipe        create a pipe
    rendezvous  exchange a word of data
    semacquire  acquire a semaphore
    semrelease  release a semaphore

Memory management
    brk_        allocate memory
    segattach   attach to or create a segment
    segdetach   detach from a segment
    segfree     free physical memory
    segbrk      change segment length
    segflush    flush cache

Namespace management
    mount       mount a 9P connection
    bind        bind a file or directory
    unmount     unmount or remove a bind

9P connections
    fversion    initialize a 9P connection
    fauth       initiate authentication

Notes
    alarm       set the alarm timer
    notify      set the note handler
    noted       continue after note " -- https://aiju.de/plan_9/plan9-syscalls

wow that does look simpler!

---

https://aiju.de/code/misc/tiny

"

The Tiny Unix Tools

The goal is to find the shortest implementation of common UNIX tools. Neither errors nor success are signaled; "meteor proofness" is not required. Evil C tricks are encouraged, submissions welcome.

rm / unlink (no flags or anything):

main(a,b)char**b;{while(*++b)unlink(*b);}

cat:

main(a,b)char**b;{while(*++b){open(*b,0);while(read(3,&a,1)>0)write(1,&a,1);close(3);}}

ed:

main(a){for(;;){read(0,&a,1);write(1,"?\n",2);}}

ls: (thanks to cls)

#include <dirent.h>
main(){DIR*d=opendir(".");struct dirent*e;while(e=readdir(d))puts(e->d_name);}

kill: (thanks to ente)

main(a,b)char**b;{kill(atoi(b[2]),atoi(b[1]));}

sh: (thanks to nortti)

main(){char a[256],*b,*c[256],**d;int p;while(1){write(1,"$ ",2);for(b=a;*(b-1)!='\n';b++){read(0,b,1);}*b=0;d=c;*d++=a;for(b=a;b<a+256&&*b!=0;b++){if(*b==' '||*b=='\n'){*b=0;*d=b+1;d++;}}*(d-1)=0;if(!(p=fork()))execvp(a,c);else wait(p);}}

wc: (thanks to TLH)

char c,i;long l,w,b;main(){while(read(0,&c,1)){b++;if(c=='\n')l++;if(!isspace(c))i=1;else if(i){i=0;w++;}}if(i)w++;printf("\t%ld\t%ld\t%ld\n",l,w,b);}
"

---

https://aiju.de/misc/languages says "I really like C, despite its weak spots. The most annoying problem is the lack of a good standard library. Plan 9 fixes that."

so we should look at plan 9's stdlib:

http://doc.cat-v.org/plan_9/programming/c_programming_in_plan_9

" The C library consists of several parts:

∙ All the Plan 9 system calls (save for a few that only the library uses)

∙ A set of subroutines to facilitate using the system calls

∙ The formatted print routines

∙ Mathematical functions

∙ Time functions

∙ Functions for working with Unicode characters, or Runes "

https://9fans.github.io/plan9port/man/man3/intro.html

" INTRO(3) INTRO(3)

NAME intro – introduction to library functions

SYNOPSIS

#include <u.h>
/* any Unix headers */
#include <libc.h>
#include <auth.h>
#include <bio.h>
#include <draw.h>
#include <fcall.h>
#include <frame.h>
#include <mach.h>
#include <regexp.h>
#include <thread.h>

DESCRIPTION This section describes functions in various libraries. For the most part, each library is defined by a single C include file, such as those listed above, and a single archive file containing the library proper. The name of the archive is /usr/local/plan9/lib/libx.a, where x is the base of the include file name, stripped of a leading lib if present. For example, <draw.h> defines the contents of library /usr/local/plan9/lib/libdraw.a, which may be abbreviated when named to the loader as −ldraw. In practice, each include file contains a magic pragma that directs the loader to pick up the associated archive automatically, so it is rarely necessary to tell the loader which libraries a program needs; see 9c(1). The library to which a function belongs is defined by the header file that defines its interface. The ‘C library’, libc, contains most of the basic subroutines such as strlen. Declarations for all of these functions are in <libc.h>, which must be preceded by (needs) an include of <u.h>. The graphics library, draw, is defined by <draw.h>, which needs <libc.h> and <u.h>. The Buffered I/O library, libbio, is defined by <bio.h>, which needs <libc.h> and <u.h>. The ANSI C Standard I/O library, libstdio, is defined by <stdio.h>, which needs <u.h>. There are a few other, less commonly used libraries defined on individual pages of this section. 
The include file <u.h>, a prerequisite of several other include files, declares the architecture-dependent and -independent types, including: uchar, ushort, and ulong, the unsigned integer types; schar, the signed char type; vlong and uvlong, the signed and unsigned very long integral types; Rune, the Unicode character type; u8int, u16int, u32int, and u64int, the unsigned integral types with specific widths; jmp_buf, the type of the argument to setjmp and longjmp, plus macros that define the layout of jmp_buf (see setjmp(3)); and the macros va_arg and friends for accessing arguments of variadic functions (identical to the macros defined in <stdarg.h> in ANSI C). Plan 9 and Unix use many similarly-named functions for different purposes: for example, Plan 9’s dup is closer to (but not exactly) Unix’s dup2. To avoid name conflicts, <libc.h> defines many of these names as preprocessor macros to add a p9 prefix, so that dup becomes p9dup. To disable this renaming, #define NOPLAN9DEFINES before including <libc.h>. If Unix headers must be included in a program, they should be included after <u.h>, which sets important preprocessor directives (for example, to enable 64-bit file offsets), but before <libc.h>, to avoid renaming problems.

Name space Files are collected into a hierarchical organization called a file tree starting in a directory called the root. File names, also called paths, consist of a number of /-separated path elements with the slashes corresponding to directories. A path element must contain only printable characters (those outside the control spaces of ASCII and Latin-1). A path element cannot contain a slash. When a process presents a file name to Plan 9, it is evaluated by the following algorithm. Start with a directory that depends on the first character of the path: / means the root of the main hierarchy, and anything else means the process’s current working directory. Then for each path element, look up the element in the directory, advance to that directory, do a possible translation (see below), and repeat. The last step may yield a directory or regular file.

File I/O Files are opened for input or output by open or create (see open(3)). These calls return an integer called a file descriptor which identifies the file to subsequent I/O calls, notably read(3) and write. The system allocates the numbers by selecting the lowest unused descriptor. They are allocated dynamically; there is no visible limit to the number of file descriptors a process may have open. They may be reassigned using dup(3). File descriptors are indices into a kernel resident file descriptor table. Each process has an associated file descriptor table. In threaded programs (see thread(3)), the file descriptor table is shared by all the procs. By convention, file descriptor 0 is the standard input, 1 is the standard output, and 2 is the standard error output. With one exception, the operating system is unaware of these conventions; it is permissible to close file 0, or even to replace it by a file open only for writing, but many programs will be confused by such chicanery. The exception is that the system prints messages about broken processes to file descriptor 2. Files are normally read or written in sequential order. The I/O position in the file is called the file offset and may be set arbitrarily using the seek(3) system call. Directories may be opened like regular files. Instead of reading them with read(3), use the Dir structure-based routines described in dirread(3). The entry corresponding to an arbitrary file can be retrieved by dirstat (see stat(3)) or dirfstat; dirwstat and dirfwstat write back entries, thus changing the properties of a file. New files are made with create (see open(3)) and deleted with remove(3). Directories may not directly be written; create, remove, wstat, and fwstat alter them. Pipe(3) creates a connected pair of file descriptors, useful for bidirectional local communication.

Process execution and control A new process is created when an existing one calls fork(2). The new (child) process starts out with copies of the address space and most other attributes of the old (parent) process. In particular, the child starts out running the same program as the parent; exec(3) will bring in a different one. Each process has a unique integer process id; a set of open files, indexed by file descriptor; and a current working directory (changed by chdir(2)). Each process has a set of attributes -- memory, open files, name space, etc. -- that may be shared or unique. Flags to rfork control the sharing of these attributes. A process terminates by calling exits(3). A parent process may call wait(3) to wait for some child to terminate. A bit of status information may be passed from exits to wait. On Plan 9, the status information is an arbitrary text string, but on Unix it is a single integer. The Plan 9 interface persists here, although the functionality does not. Instead, empty strings are converted to exit status 0 and non-empty strings to 1. A process can go to sleep for a specified time by calling sleep(3). There is a notification mechanism for telling a process about events such as address faults, floating point faults, and messages from other processes. A process uses notify(3) to register the function to be called (the notification handler) when such events occur.

Multithreading Where possible according to the ANSI C standard, the main C library works properly in multiprocess programs; malloc, print, and the other routines use locks (see lock(3)) to synchronize access to their data structures. The graphics library defined in <draw.h> is also multi-process capable; details are in graphics(3). In general, though, multiprocess programs should use some form of synchronization to protect shared data. The thread library, defined in <thread.h>, provides support for multiprocess programs. It includes a data structure called a Channel that can be used to send messages between processes, and coroutine-like threads, which enable multiple threads of control within a single process. The threads within a process are scheduled by the library, but there is no pre-emptive scheduling within a process; thread switching occurs only at communication or synchronization points. Most programs using the thread library comprise multiple processes communicating over channels, and within some processes, multiple threads. Since I/O calls may block, a system call may block all the threads in a process. Therefore, a program that shouldn’t block unexpectedly will use a process to serve the I/O request, passing the result to the main processes over a channel when the request completes. For examples of this design, see ioproc(3) or mouse(3).

SEE ALSO nm(1), 9c(1)

DIAGNOSTICS Math functions in libc return special values when the function is undefined for the given arguments or when the value is not representable (see nan(3)). Some of the functions in libc are system calls and many others employ system calls in their implementation. All system calls return integers, with –1 indicating that an error occurred; errstr(3) recovers a string describing the error. Some user-level library functions also use the errstr mechanism to report errors. Functions that may affect the value of the error string are said to “set errstr”; it is understood that the error string is altered only if an error occurs. "

https://git.suckless.org/9base/file/lib9/libc.h.html

" Just a clarification: the Go compilers statically link against pieces of the plan9 libc, but the binaries generated for Go programs do not. " -- https://www.reddit.com/r/golang/comments/b8dgr/curious_what_the_relationship_among_go_plan_9/

" Opening a network connection in Unix:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
...
struct sockaddr_in sock_in;
struct servent *sp;
struct hostent *host;
...
memset(&sock_in, 0, sizeof (sock_in));
sock_in.sin_family = AF_INET;
f = socket(AF_INET, SOCK_STREAM, 0);
if (f < 0)
        error("socket");
if (bind(f, (struct sockaddr*)&sock_in, sizeof sock_in) < 0)
        error("bind");
host = gethostbyname(argv[1]);
if(host){
        sock_in.sin_family = host->h_addrtype;
        memmove(&sock_in.sin_addr, host->h_addr, host->h_length);
        ...
}else{
        sock_in.sin_family = AF_INET;
        sock_in.sin_addr.s_addr = inet_addr(argv[1]);
        if (sock_in.sin_addr.s_addr == -1)
                error("unknown host %s", argv[1]);
}
sp = getservbyname("discard", "tcp");
if (sp)
        sock_in.sin_port = sp->s_port;
else
        sock_in.sin_port = htons(9);
if (connect(f, (struct sockaddr*)&sock_in, sizeof sock_in) < 0)
        error("connect:");

Tedious and clunky; protocol-specific details leak through the API; C-specific, hard to access from elsewhere without ugly wrapping.

Opening a network connection in Plan 9:

#include <u.h>
#include <libc.h>
...
fd = dial(netmkaddr(argv[1], "tcp", "discard"), 0, 0, 0);
if(fd < 0)
        sysfatal("can't dial %s: %r", argv[1]);

Clean and simple; abstracts the protocol details and addressing from the application. " -- https://docs.huihoo.com/plan9/Plan9.pdf

---

"On Unix, Unix-like and other POSIX-compliant operating systems, popular system calls are open, read, write, close, wait, exec, fork, exit, and kill.

...

Categories of system calls

System calls can be grouped roughly into six major categories:[12]

    Process control
        create process (for example, fork on Unix-like systems, or NtCreateProcess in the Windows NT Native API)
        terminate process
        load, execute
        get/set process attributes
        wait for time, wait event, signal event
        allocate and free memory
    File management
        create file, delete file
        open, close
        read, write, reposition
        get/set file attributes
    Device management
        request device, release device
        read, write, reposition
        get/set device attributes
        logically attach or detach devices
    Information maintenance
        get/set time or date
        get/set system data
        get/set process, file, or device attributes
    Communication
        create, delete communication connection
        send, receive messages
        transfer status information
        attach or detach remote devices
    Protection
        get/set file permissions" -- https://en.wikipedia.org/wiki/System_call

i already googled for minimal syscalls minimal tui


so far i like some combination of the following as models for a small set of key syscalls that should be supported (obviously each of these has a bunch of stuff that we will not support, too):

---

https://www.lexaloffle.com/pico8_manual.txt

---

" Crates we rely on

We're not going to cat Cargo.toml here, instead focusing on some select crates that have earned the prestigious award of a lifetime invitation to each of our birthday parties forever. "Better-than-std" crates

    crossbeam is better for inter-thread communication than std::sync::mpsc in almost every way, and may be merged into std eventually.
    parking_lot has a mutex implementation better than std::sync::Mutex in almost every way, and may be merged into the standard library (one day). It also provides many other useful synchronization primitives.
    bytes is a more robust, and often more performant, way to play with bytes compared to Vec<u8>.
    socket2 is what you will end up at if you are ever doing lower-level networking optimizations.

Beauty supply

    fern is a dead-simple way to customize and prettify your logging output. We use it to keep our logs readable and internally standardized.
    structopt is how you always dreamed CLI arguments would be handled. There's no reason not to use it unless you're going for bare-minimum dependencies.

Cargo cult classics

    cargo-release allows us to cut internal releases painlessly.
    cargo-udeps identifies unused dependencies and allows us to keep our build times minimal.
    cargo tree (recently integrated in cargo) shows a dependency tree that's useful in many ways, but mainly in identifying ways to minimize dependencies.
    cargo-geiger helps us quickly evaluate external dependencies for possible security (or correctness) concerns.
    cargo-flamegraph helps us enormously when tracking down performance hot-spots in our code."

---

some unix shell utils:

paste timeout

JdeBP 3 months ago

parent favorite on: The most surprising Unix programs

I would guess, from experience of people doing things the hard way, at:

And, given what you just wrote:

AdieuToLogic 3 months ago [–]

A handy one to add to your list is:

seq[0]

EDIT: I didn't see that you had already included 'jot' so removed it from my original reply.

0 - https://www.freebsd.org/cgi/man.cgi?query=seq&apropos=0&sekt...

JdeBP 3 months ago [–]

I didn't include it because in my experience people don't overlook seq, whereas they do overlook jot. I only included things where I've encountered people overlooking such tools and doing things the hard way.

---

https://rosettacode.org/wiki/Terminal_control/Cursor_positioning

---

some tui commands in here, e.g. i see DocClear in demos a lot:

https://templeos.holyc.xyz/Wb/Adam/DolDoc/DocRecalcLib.html#l120

some graphics commands, e.g. draw a line, i think:

https://templeos.holyc.xyz/Wb/Adam/Gr/GrPrimatives.html#l770

here's an example of the cursor being moved: https://templeos.holyc.xyz/Wb/Adam/DolDoc/DocChar.html#l21 looks like the real stuff is: " doc->cur_col=cc; doc->cur_entry=doc_ce; "

and mb: https://templeos.holyc.xyz/Wb/Adam/WinMgr.html#l315

---

" Much of the Zig standard library is ported from musl

musl is such a high quality codebase, that most of the Zig standard library's interface to Linux is a direct port of musl code.

This has prevented countless bugs and made things "just work" in general. Without this head start, the Zig project would have had to spend more time on Linux system interface and less on everything else.

For example, thanks to Marc Tiehuis contributions, the Zig standard library has all the math functions you would expect to find in libm, and they are available at compile-time as well as runtime. "

---

ord chr

---

https://www.google.com/search?client=ubuntu&channel=fs&q=concurrency+primitives&ie=utf-8&oe=utf-8

---

https://impurepics.com/posts/2018-09-02-concurrency-primitives.html

---

https://github.com/tauri-apps/tauri

API

    setTitle - set the window title
    command - make custom API interfaces
    execute - STDOUT Passthrough with command invocation
    open - open link in a browser
    event - two part api consisting of emit and listen
    httpRequest - command rust to make an http request
    openDialog - native file chooser dialog
    saveDialog - native file saver dialog
    readDir - list files in a directory
    createDir - create a directory
    removeDir - remove a directory
    removeFile - remove a file
    renameFile - rename a file
    copyFile - copy a file to a new destination
    writeFile - write file to local filesystem
    writeBinaryFile - write binary file to local filesystem
    readBinaryFile - read binary file from local filesystem
    readTextFile - read text file from local filesystem

---

https://blogs.igalia.com/compilers/2020/06/23/dates-and-times-in-javascript/

---

https://blogs.igalia.com/compilers/2020/06/23/dates-and-times-in-javascript/ https://news.ycombinator.com/item?id=23781819

oefrha 1 day ago [–]

TL;DR:

Actual spec: https://tc39.es/proposal-temporal/

Less formal docs: https://tc39.es/proposal-temporal/docs/

Examples: https://tc39.es/proposal-temporal/docs/cookbook.html

You can try Temporal in the JS console on a doc page.

reply

I think this is a great proposal and a huge step in the right direction for JS. I am curious though, is there a reason not to just essentially duplicate the Joda[0]/Java[1]/ThreeTen?[2] API? As far as I understand, they are generally considered a gold standard as far as datetime APIs.

Is it too Java-y that it wouldn't make sense to port to JS? Are there copyright implications?

The JS Temporal proposal _does_, as far as I can tell, share many of the underlying fundamental concepts, which is great, but then confusingly has some types, such as `LocalDateTime`, which mean the exact opposite of what they do in the well-known Java API [3].

There is still discussion going on about these details, but from my perspective it seems like the best thing would be to just copy the Java naming conventions exactly.

[0]: https://www.joda.org/joda-time/

[1]: https://docs.oracle.com/javase/8/docs/api/java/time/package-...

[2]: https://www.threeten.org/

[3]: https://github.com/tc39/proposal-temporal/issues/707

reply

TheCoelacanth 1 day ago [–]

This. They already copied the crappy date API from Java. Why not copy the good one too?

reply

salmonellaeater 1 day ago [–]

There is already a JS port of Joda[1] which works well and, crucially, uses all the same names and concepts. Could we just replace the Temporal proposal with this and save a lot of work and confusion?

[1] https://js-joda.github.io/js-joda/

reply

leothekim 1 day ago [–]

+1 to going off of Joda API. This is a very well-thought out datetime API with a fluent interface for building date/time objects in an immutable way. Just doing this would be a huge step improvement for JS.

reply

bxparks 1 day ago [–]

I find the Java 8 java.time API [1] easier to understand than Joda Time [2] when working with timezones. In particular, the OffsetDateTime and ZonedDateTime classes in java.time seem well-designed and easy to use. The equivalents in Joda Time are harder for me.

[1] https://docs.oracle.com/javase/8/docs/api/java/time/package-...

[2] https://www.joda.org/joda-time/apidocs/index.html

reply

shadowmatter 1 day ago [–]

Agreed. Joda had the right abstractions (instants, durations, etc) but the class hierarchy for them was unnecessarily complex. A lot of this complexity comes from opting for the abstractions to be either mutable or immutable.

For example, `ReadableInstant` [1] in Joda implements 3 interfaces and has 7 subclasses. And really, what is the difference between `AbstractDateTime` and `BaseDateTime`? Whereas `Instant` from java.time [2] is an immutable value type, and I haven't found it lacking in any respect.

On the whole, java.time has struck me as extremely well designed (coming from Python and from previous date and time libraries in Java), and I think it would behoove other languages to liberally copy its design.

[1] https://www.joda.org/joda-time/apidocs/org/joda/time/Readabl...

[2] https://docs.oracle.com/javase/8/docs/api/java/time/Instant....

reply

foepys 1 day ago [–]

Considering that JodaTime was more or less the reference for java.time, I would be very disappointed if java.time were worse and hadn't gotten rid of JodaTime's design flaws.

reply

masklinn 1 day ago [–]

> Considering that JodaTime was more or less the reference for java.time

It would probably be more correct to say that java.time is Joda version 2. Colebourne, the original author of Joda, was also one of the leads on JSR-310, and very much intended that 310 learn from the mistakes of Joda.

cactus2093 1 day ago [–]

I don't think it's actually that important that every language deals with them the exact same way. I interpret that to mean they all translate the APIs as literally as possible between them, using the same method names, same types of arguments, etc. Different languages all have their own quirks and patterns and it's also useful that the library be idiomatic within the language.

Most of the problems I have seen in dealing with time, across various languages, arise from allowing types to be way too ambiguous. A lot of code uses interchangeably things like "UTC -0700" and "America/Los_Angeles" which are actually very different (one incorporates daylight savings and one doesn't). The important fix is that languages start using reasonable abstractions and are stricter about requiring explicit conversions between types rather than trying to implicitly guess an interpretation.

The first library I've seen get this right was Joda in Java, and it later got included in the standard library for Java. It's been copied into other languages, and I've used it on the frontend a bit as JSJoda, but the literal translation makes it a bit awkward in JS and I found it was hard to get other JS developers to prefer it over the more popular moment or date-fns. Temporal looks great; it actually seems to borrow heavily from Joda but doesn't literally copy all of the names or methods, so hopefully it will be the best of both worlds.

reply
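The distinction above between a fixed UTC offset and an IANA zone name is easy to demonstrate. A quick Python sketch (mine, not from the thread): "UTC -0700" is a constant offset, while "America/Los_Angeles" observes DST, so the two agree only for part of the year.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

fixed = timezone(timedelta(hours=-7))   # "UTC -0700": a constant offset
la = ZoneInfo("America/Los_Angeles")    # IANA zone: observes DST

summer = datetime(2021, 7, 1, 12, 0, tzinfo=la)
winter = datetime(2021, 1, 1, 12, 0, tzinfo=la)

assert summer.utcoffset() == timedelta(hours=-7)  # PDT: agrees with the fixed offset
assert winter.utcoffset() == timedelta(hours=-8)  # PST: diverges from it
assert datetime(2021, 1, 1, 12, 0, tzinfo=fixed).utcoffset() == timedelta(hours=-7)
```

Code that treats the two interchangeably will be an hour off for roughly half the year.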

mattwad 1 day ago [–]

It's not just that. Timezones are the real bugger. Microsoft has their own list of names separate from the standard IANA timezones, for example. If it weren't for timezones, IMO dates are pretty damn trivial. It's just a matter of storing and transferring them properly (in UTC, always!)

reply

frenchyatwork 1 day ago [–]

Timestamps are damn trivial. When people use dates, that's all they want 90% of the time. The remaining 10% is where all the hard problems are.

For example, let's say I have a store that opens from 8:00-17:00 every day, and it's currently 22:00. How long is it till it opens next?

reply
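The naive wall-clock answer to the store-hours question can be sketched in a few lines (my code, not from the thread). It gives "10 hours"; the replies below explain why the real answer also depends on weekends, holidays, and timezone changes.

```python
from datetime import datetime, time, timedelta

OPEN = time(8, 0)  # store opens at 08:00 every day

def time_until_open(now: datetime) -> timedelta:
    """Naive wall-clock arithmetic: ignores weekends, holidays, DST."""
    today_open = datetime.combine(now.date(), OPEN)
    if now < today_open:
        return today_open - now
    return today_open + timedelta(days=1) - now  # wait for tomorrow's opening

print(time_until_open(datetime(2021, 6, 1, 22, 0)))  # 10:00:00
```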

noja 1 day ago [–]

Three days and an hour more than you think, because it's the weekend, and an extra day because the public holiday tomorrow may or may not be a working day. Oh, and there was a timezone change.

reply

bonedangle 1 day ago [–]

And it's December 31st.. Still shocked by how many times I see a year rollover go bad.

reply

aarong11 1 day ago [–]

There was a leap second too, you forgot that one!

reply

tigershark 22 hours ago [–]

No, they are not trivial at all, as I explained in my previous comment. For more sauce: https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a... https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a-silver-bullet/amp/ As for your question, it's impossible to answer without further context. There may be DST tonight, or a leap second, or the enforcement of some strange treaty that changes the time zone...

reply

1-more 1 day ago [–]

There is a case for not storing a time in UTC: you schedule a meeting in the future, and the local authorities change when DST starts in that location. Now your user gets the alert an hour too early or too late for their meeting. The scenario is more fleshed out in the link below, but I don't want to just copy and paste the article into the comment.

http://www.creativedeletion.com/2015/03/19/persisting_future...

reply
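The pattern the linked article argues for can be sketched as follows (the names here are mine, not the article's): persist a future event as local wall time plus an IANA zone name, and resolve it to a UTC instant only when the alert fires. If the zone's DST rules change in the meantime, re-resolving picks up the new rules, whereas a stored UTC instant would silently drift by an hour.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# What goes in the database: wall time + zone name, NOT a UTC instant.
stored = ("2030-06-15T09:00", "America/Los_Angeles")

def resolve(local_iso: str, zone: str) -> datetime:
    """Turn the persisted wall time into a concrete UTC instant, late."""
    local = datetime.fromisoformat(local_iso).replace(tzinfo=ZoneInfo(zone))
    return local.astimezone(timezone.utc)

print(resolve(*stored))  # 2030-06-15 16:00:00+00:00 (PDT is UTC-7)
```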

tigershark 22 hours ago [–]

No, they are absolutely not. Ask Jon Skeet for example:

https://codeblog.jonskeet.uk/2010/12/01/the-joys-of-date-tim...

https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a...

Or look at the usual falsehood list:

https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b...

Some interesting ones:

A week (or a month) always begins and ends in the same year.

Months have either 28, 29, 30, or 31 days.

There is a leap year every year divisible by 4.

The day before Saturday is always Friday.

reply
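The "leap year every year divisible by 4" falsehood is easy to check against the stdlib (a quick sketch of mine): the Gregorian rule also excludes century years not divisible by 400.

```python
import calendar

assert 1996 % 4 == 0 and calendar.isleap(1996)      # ordinary leap year
assert 1900 % 4 == 0 and not calendar.isleap(1900)  # divisible by 4, NOT leap
assert calendar.isleap(2000)                        # divisible by 400, leap

print("1900 is a leap year:", calendar.isleap(1900))  # False
```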

deathanatos 1 day ago [–]

> in UTC, always!

… which the proposal doesn't use. (It uses POSIX.)

reply

balfirevic 1 day ago [–]

> I can't speak for every language you listed, but C# has great and clear datetime builtin.

If you think the built-in .NET DateTime is any good, give NodaTime a try. Standard DateTime is barely usable compared to it.

For JavaScript, there is the excellent js-joda library (same design as NodaTime and Joda-Time), but it's 43 kB minified and compressed, which is... not terrible but also not great; it really depends on your use case whether it's worth it.

reply

balfirevic 10 hours ago [–]

> Went to the website to see some example code.. looks basically the same?

The main difference is that there are separate types for the following concepts:

1) A point in time on the global timeline

2) A specific date and time in a specific time zone

3) A specific date and time without any information about time zone (and two more related types that contain only date information or only time information).

There are a few more types, but these illustrate the main difference from .NET DateTime. In case you missed it, the basic concepts are explained here: https://nodatime.org/3.0.x/userguide/concepts (which I should have linked to in the first place).

This blog post by Jon Skeet explains the troubles with DateTime: https://blog.nodatime.org/2011/08/what-wrong-with-datetime-a... (he's also one of the authors of NodaTime).

reply
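Rough Python-stdlib analogues of the three concepts above (the mapping is mine; Python folds all three into one `datetime` class, distinguished only by the `tzinfo` attribute, which is exactly the ambiguity NodaTime's separate types are designed to avoid):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 1) A point on the global timeline (NodaTime's Instant)
instant = datetime(2021, 1, 1, 11, 0, tzinfo=timezone.utc)

# 2) A date and time in a specific time zone (NodaTime's ZonedDateTime)
zoned = datetime(2021, 1, 1, 12, 0, tzinfo=ZoneInfo("Europe/Paris"))

# 3) A date and time with no zone attached (NodaTime's LocalDateTime)
naive = datetime(2021, 1, 1, 12, 0)  # tzinfo is None

assert instant == zoned  # Paris is UTC+1 in January: the same instant
try:
    naive < instant      # Python refuses to order the third kind against the others
except TypeError:
    print("naive vs aware comparison raises TypeError")
```

With separate types, that mistake is a compile error rather than a runtime surprise.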

---

summary of the above:

https://www.joda.org/joda-time/ is well-regarded, copy that. The new proposal mostly copies it and is pretty good as well:

Actual spec: https://tc39.es/proposal-temporal/

Less formal docs: https://tc39.es/proposal-temporal/docs/

Examples: https://tc39.es/proposal-temporal/docs/cookbook.html

---

https://github.com/magnars/dash.el

---

windows seems to have API calls rather than syscalls (rather, the true syscalls are within the NT kernel, which is not supposed to be directly accessible to user code, i think; it is said that many of the NT kernel calls are wrapped by kernel32.dll though).

this page has a concise listing of some win32 core API calls from (a presumably old version of) kernel32.dll:

https://www.plantation-productions.com/Webster/Win32Asm/win32API.html

still looking for data on which of these are most frequent (so that we know which system functionality is so core that we need to provide it in our syscalls in the bootstrapping implementations). i guess i'd like a static analysis. i did google searches like "Kernel32.dll most frequent static analysis win32", but no dice.

this looks useful:

https://caiorss.github.io/C-Cpp-Notes/WindowsAPI-cpp.html

https://www.geoffchappell.com/studies/windows/win32/kernel32/api/index.htm presumably filter for the '3.51 and higher' and 'documented' entries

of course then we aren't getting the 'latest and greatest' functions, which may have largely replaced the older ones with better APIs. Could also look at the newer UWP.

---

deno "Minimizes core API size, while providing a large standard library with no external dependencies.":

https://doc.deno.land/builtin/stable

deno has "A standard library, modeled after Go's standard library, was created in November 2018 to provide extensive tools and utilities, partially solving Node.js' dependency tree explosion problem.":

https://deno.land/std@0.64.0 https://github.com/denoland/deno/tree/master/std

---

emacs libs:

dash.el s.el

---

https://juliadata.github.io/DataFrames.jl/stable/

"

akdor1154 1 day ago [–]

I find the IndexedTables (the underlying dataframe-like structure) model nicer to work with than DataFrames.jl. It's a much narrower API, but it's well designed enough that this is a good thing, imo. The codebase is also a fair bit smaller.

It also uses strongly typed tables (e.g. Table<int, string> etc.), whereas DataFrames is loosely typed. Again I think this is a good decision (though it does grate with the Julia JIT's property of "being slow" the first time you run a function on a new type).

Finally, its split into IndexedTables and NDSparse is again a good design decision that I have not seen replicated in any other dataframe library.

It just seems all around better designed.

On the other hand it is verging on being unmaintained.

reply " (IndexedTables is part of the JuliaDB project, i think)

---

pfalcon 69 days ago [–]

> pycopy had some minor improvements over MicroPython's asyncio module

Well, I was the author of the "uasyncio" module, as was used with MicroPython. When I switched to Pycopy, I took that module with me.

> The improvements to the old implementation are significant.

Well, how to put it. Before I wrote uasyncio, I wrote asyncio_slow. And before I wrote that, I watched dozens of people writing their own async frameworks for Python. And I watch dozens of people doing the same as we speak. The reason I embarked on writing my own is that I wasn't satisfied with how other people did it. I imagine you feel the same. So, good luck. (If you do it right, I'll use your stuff.)

> but I'd be shocked if the asyncio implementation was better

It's better, where "better" is defined as "more minimalist". If you're not interested in minimalism, then you might as well use CPython's asyncio (and CPython itself).

-- https://news.ycombinator.com/item?id=23466740
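To illustrate what "minimalist" can mean for an async framework (this is my own toy sketch, not uasyncio's actual implementation): a complete round-robin scheduler for native coroutines fits in a handful of lines, since `await` is just a resumable suspension point.

```python
from collections import deque

class Yield:
    """Bare awaitable that suspends the coroutine once (like sleep(0))."""
    def __await__(self):
        yield

def run(*coros):
    """Drive coroutines round-robin until all of them finish."""
    ready = deque(coros)
    results = []
    while ready:
        coro = ready.popleft()
        try:
            coro.send(None)      # run until the next suspension point
            ready.append(coro)   # not done: back of the queue
        except StopIteration as stop:
            results.append(stop.value)
    return results

async def counter(name, n):
    for _ in range(n):
        await Yield()            # cooperatively hand control back
    return f"{name} done"

print(run(counter("a", 2), counter("b", 3)))  # ['a done', 'b done']
```

A real event loop adds timers and I/O polling on top, but this is the core of the scheduling model.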

---