Functions, libraries, etc. should come with at least a little bit of semantic labeling, e.g. data race freedom.
In Rust, the std library is always available and doesn't need to be listed in your Cargo.toml; however, its items still need to be imported with use (or referred to by full path).
---
https://github.com/weld-project/weld/tree/master/python/grizzly/grizzly https://www.weld.rs/grizzly https://pypi.python.org/pypi/pygrizzly/0.0.1 Grizzly is a subset of the Pandas data analytics library integrated with Weld
https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/groupbyweld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/numpy_weld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/dataframeweld.py https://github.com/weld-project/weld/blob/master/python/grizzly/grizzly/seriesweld.py
https://www.weld.rs/weldnumpy WeldNumpy is a subset of the NumPy numerical computing framework integrated with Weld
https://medium.com/dunder-data/minimally-sufficient-pandas-a8e67f2a2428
---
Avi-D-coder 3 months ago [-]
Haskell without lens, text, vector, etc... is a bit like rust with only core not std.
The haskell standard library is tiny. Libraries like lens are not optional. In practice you won't understand any open source Haskell without rudimentary understanding of lens. I get why parser libraries were banned, but excluding lens, vector, and text?
I like Rust a lot, but Haskell minus its more advanced type system is just Rust plus GC. Let's not pretend this is a fair comparison of languages when it's primarily a comparison of standard libraries.
orbifold 3 months ago [-]
That's a total stretch; lens is not used in GHC, for example, nor in lots of other smaller compilers written in Haskell. It is used in Ermine, but that has been stuck in a semi-complete state for a while now and Ekmett has moved on.
runeks 3 months ago [-]
I second this.
I’ve written tens of thousands of lines of Haskell, and I’ve never used lens. Also, putting it in the same category as text and vector doesn’t make sense — these are indeed unavoidable, and practically all my projects use them.
smichael 3 months ago [-]
Thirded. No lens in pandoc (50k lines of haskell), darcs (40k), most hledger packages (15k).
sbergot 3 months ago [-]
This is why I gave up on Haskell. Lens works as advertised, but is a pain to learn and to use in practice: the abstraction is tough to grasp and it is hard to form an intuition about it. The compilation errors are laughably esoteric. The number of ad-hoc squiggly operators is ridiculous. You also need to understand a lot of language extensions to get how the type checking works.
To me it looks like an impressive proof of concept for a future programming language based around it. ...
foldr 3 months ago [-]
Lenses don't really give you anything that you can't get from (a) a sensible syntax for record updates and (b) intrusive pointers. Lenses only exist because of Haskell 98's uniquely bad support for record types. Record access and update in most other languages is just simpler.
fmap 3 months ago [-]
Lenses are more than reified record labels though. There is a hierarchy of concepts that can be freely composed based on which features your data structure actually supports. In particular, lenses can be composed with traversals ("lenses" pointing at multiple fields) yielding "LINQ" like features without introducing new syntax or concepts.
The main problem with lenses is that common lens libraries look extremely complicated at first glance and seem to be solving a very simple problem. That rightfully puts most people off of learning what all the fuss is about.
0xab 3 months ago [-]
If you use lens as just a way to access records like you do in other languages, then there is absolutely nothing hard about it. Literally all you need to know is:
Name your records like "data Prefix = Prefix { prefixFieldName :: ... }" call "makeFields Prefix" once at the bottom of your file and use "obj ^. fieldName" to access and "obj & fieldName .~ value" to set.
That's it. You now have 100% of the capabilities of record update in any other language. This doesn't get any simpler in any other language. It even pretty much looks like what you would do in other languages.
I'll grant you, Haskell and lens do a terrible job of explaining subsets of functionality that are simple and let you get the job done before jumping in the deep end.
foldr 3 months ago [-]
Yeah, so it's a less good way of accessing record fields than the one present in 99% of other programming languages. Your own description makes this plain. Compare Javascript, where access is just obj.field and update is obj.field = value.
What bugs me is when Haskell advocates try to use all the additional esoteric features of the lens library as an excuse for this fundamental baseline crappiness.
Haskell really just needs proper support for record types. Then people could use lenses when they actually need lenses (never?). At the moment, they're using lenses because they want something that looks almost like a sane syntax for record updates.
tathougies 3 months ago [-]
Record types are not a solution to the problem lens solves. Lens is a good library and a good concept. If we spent some time on it in programming class, most people would get it. When moving to non-Haskell languages, the lack of proper lenses is something I notice almost immediately.
foldr 3 months ago [-]
I know what the lens library does - I write Haskell for my day job.
In practice, the main reason people use it is to work around the deficiencies of Haskell's built-in record system:
>I never built fclabels because I wanted people to use my software (maybe just a bit), but I wanted a nice solution for Haskell’s non-composable record labels.(http://fvisser.nl/post/2013/okt/11/why-i-dont-like-the-lens-...)
The other features of lenses don't strike me as particularly useful. YMMV. I'd also question the quality of the library. It's full of junk like e.g. http://hackage.haskell.org/package/lens-4.17.1/docs/src/Cont..., which is just an invitation to write unreadable code.
tathougies 3 months ago [-]
My biggest use case for lenses that I miss in other languages is the ability to interact with all elements of a collection, or elements in deeply nested collections.
For example, if I had a list of records with a field named 'categories' holding a list of objects with a field named 'tags', and I wanted to get all of these names in one list, without nested loops, lens makes it easy 'record ^.. categories . each . tags . each' or I could update them all, etc. It's just so easy to do this kind of data munging with lens that writing fors, whiles, etc in other languages is painful.
jpittis 3 months ago [-]
> I think the smaller differences are also large enough to rule out extraordinary claims, like the ones I’ve read that say writing a compiler in Haskell takes less than half the code of C++ by virtue of the language
Specifically the "by virtue of the language" part:
Seems to me like it's unreasonable to claim the languages are on equal footing because fancy parser libraries aren't allowed to be used for the project. The fancy parser libraries exist for certain languages specifically because the languages enable them to be written. (For example in Haskell: monadic libraries, libraries that take advantage of GADTs, etc.)
trishume 3 months ago [-]
I don't think monadic parser libraries have a real claim to be that difference. All the languages listed have excellent parsing libraries that make things similarly easy, if not by language power then by grammar DSL with embeddable code snippets.
I think if any library could make a real difference for Haskell it's most likely to be http://hackage.haskell.org/package/lens, which a Haskeller friend of mine claims could likely make a lot of the AST traversal and rewriting much terser.
pwm 3 months ago [-]
While I found your article informative and interesting, I think it only works in the very specific context of this assignment. Disallowing powerful language features/libraries means it's not a level playing field and thus not a fair comparison. Some languages' standard libraries are tiny, some are huge. Some languages have lots of advanced features. E.g. GP mentioned GADTs, with which one can write type-safe, correct-by-construction ASTs. In other words, programs passing specific tests in a specific context does not imply they are comparable in terms of general correctness/robustness/maintainability (as you noted re: caught edge cases).
howenterprisey 3 months ago [-]
Hoopl (data flow analysis) would also make a difference. I did a very similar project at my university in Haskell and Hoopl definitely saved us from writing quite a bit of code. We also used parser combinators in the frontend, which I think saved us time too.
anaphor 3 months ago [-]
I've found PEGs (Parsing Expression Grammars) to make things extremely easy and terse. E.g. OMeta, Parsley, etc.
My experience with using both PEGs and parser combinators is that there isn't a huge difference in the total number of lines of code. On the other hand though, the syntax of PEGs would be easier to understand for someone who is familiar with BNF style notation.
pyrale 3 months ago [-]
Recoding a viable subset of lens would have taken 50 locs in haskell. Likewise, rewriting parser combinators would not have taken long for experienced devs. The problem here is that requiring people to recode the libs on top of the compiler is disingenuous. And if you ban idiomatic libs, you also ban most online help, tutorials, etc.
loup-vaillant 3 months ago [-]
(A suitable subset of) Parsec is about 100 lines of OCaml. Implementing a PEG syntax on top of it is about 150 lines of Haskell (or less, I'm a Haskell noob).
Building up the knowledge to get to this point however… nope, those students were better off going hand written recursive descent (or Lex/Yacc, since an equivalent was allowed).
https://github.com/LoupVaillant/Monokex/blob/master/src/pars...
http://loup-vaillant.fr/projects/metacompilers/nometa-haskel...
steveklabnik 3 months ago [-]
My understanding is that in production compilers, hand rolled parsers are the norm. Parsing libraries are cool, but just aren’t used for big projects.
sanxiyn 3 months ago [-]
Both OCaml and GHC use parser generators. It is incorrect to suggest production compilers hand roll parsers.
steveklabnik 3 months ago [-]
Two counterexamples does not disprove “a norm”. There are always exceptions!
tomasato 3 months ago [-]
Excluding the lens library (as per the article) is unusual; it provides natural getter/setter and row-polymorphism-type functionality.
More anecdotally, I'd argue parsing libraries are common, just look at the prevalence of attoparsec and others. But most parsing libraries in the ecosystem are parser combinator libraries, which don't offer the performance and nice error messages that compilers need.
garmaine 3 months ago [-]
That was where I stopped reading. If a library like lens—used by nearly every haskeller in every project—was disallowed, I don’t know what the purpose of this exercise was.
---
Patrick Walton (pcwalton) Idea: the "left-pad index", a score for Rust crates that combines small size with popularity.
bascule: I went ahead and crunched the numbers for crates.io, surveying the top 500 crates by recent downloads, dividing that number by the crate size, and coming up with the following results:
Per Patrick, here are some good candidates for potential inclusion in std:
Patrick Walton (pcwalton) @bascule: Neat! As I suspected, matches, atty, cfg-if, lazy_static, memoffset, scopeguard, nodrop could all easily be in the stdlib.
Ixrec 27d
I agree that any metric which includes transitive clients is going to have its usefulness quickly demolished by the "oh, crossbeam used it" problem. ...
Let's try reverse_deps / crate_size ^ 2:
matches
lazy_static
failure_derive
atty
phf_codegen
hex
phf
strum
cfg-if
RazrFalcon 27d
It doesn't seem like a good metric. From crates on the screenshot, only matches and cfg-if are std worthy, imho. And both of them should be implemented as a language feature anyway.
Personally, log is my main request. Also arrayvec/smallvec. byteorder is also very popular.
UPD: I almost forgot about language-level bitflags. The current implementation isn't user-friendly (IDEs cannot expand macros yet, so it breaks autocompletion).
scottmcm 27d
My personal favourite microcrate is matches .
Agreed, to the point that we keep having conversations about it being a feature with real syntax (one proposal was x is Some(_), for example.)
Would still be worth contemplating putting assert_matches! in std, though, the same way we have assert_eq! even with == syntax...
try_from
Good to see we've at least made progress on some of them :slight_smile:
synek317 27d
Having some experience with multiple 'mainstream' languages, I was pretty surprised that following crates are not in the std:
log
rand
lazy_static or something similar
Other good candidates:
syn, quote, proc_macro2 - must haves to create any proc macro,
regex,
itertools - it provides a lot of goodies but sometimes I decide to write code in more ugly way just because I'm too lazy to add the dependency or I want to reduce build time,
derive more
Also, I don't understand why there is std::time and a crate time. And chrono, or at least parts of chrono, seems to be a good candidate.
bascule 27d
For example, the table above contains lazy_static. It has a lot of dependencies, because it solves a specific and quite a common problem. However, I find the once-cell more elegant and ergonomic. It has less reverse dependencies mostly because it is younger (I think).
I would agree just copying and pasting lazy_static into std as-is would be a bad idea. That said, you're missing the forest for the trees: it isn't so much that we should just outright copy and paste these crates into std, but rather these crates provide features which might be good candidates for first-class std features.
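One data point for the "first-class std features" argument: std later grew OnceLock (stable since Rust 1.70), which covers much of what lazy_static/once_cell are used for, without macros. A minimal sketch:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Lazily-initialized global: computed on first access, thread-safe.
fn config() -> &'static HashMap<&'static str, u32> {
    static CONFIG: OnceLock<HashMap<&'static str, u32>> = OnceLock::new();
    CONFIG.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("retries", 3);
        m
    })
}

fn main() {
    assert_eq!(config()["retries"], 3);
    // Subsequent calls reuse the same initialized value.
    assert!(std::ptr::eq(config(), config()));
}
```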
In the case of lazy_static, improving Rust's core support for static data (e.g. static initializers, first-class heap-allocated statics, and associated statics) is a common topic on these forums. The problem of static data, in particular, is one that can be solved much more elegantly and powerfully at the language level than at the library level, and could potentially interact with things like the trait system or program startup.
Centril 27d
syn, quote, proc_macro2 - must haves to create any proc macro,
This would amount to freezing the syntax of the language itself.
Also arrayvec / smallvec .
I'm personally in favor of ArrayVec<...> using const generics on nightly because they are sort of a vocabulary type, at least for a language like Rust.
Would still be worth contemplating putting assert_matches! in std , though, the same way we have assert_eq! even with == syntax...
Imo assert!(let A = expr && ...); seems strictly more flexible.
itertools - it provides a lot of goodies but sometimes I decide to write code in more ugly way just because I'm too lazy to add the dependency or I want to reduce build time,
We have imported some stuff from itertools over time. Would be worth going over again to see if there are some more things we could add.
derive more
Could definitely see this; adding more built-in derives for standard library traits seems sensible if obvious structural implementations can be given.
In the case of lazy_static , improving Rust's core support for static data, e.g. static initializers, first-class heap-allocated statics, and associated statics, are common topics on these forums. The problem of static data, in particular, is a problem that can be solved much more elegantly and powerfully at the language level rather than at the library level, and could potentially interact with things like the trait system or program startup.
I think it would take some convincing for me to be comfortable with baking in support for (and thereby encouraging) what are essentially global singletons into the language itself since that is often a code-smell and hacks around better architectures. Most of the times I've used lazy_static! I've come to regret it later.
first-class heap-allocated statics
There are plans :slight_smile: (Me and Oliver should probably write an RFC at some point...) RustConf 2019 - Taking Constant Evaluation to the Limit by Oliver Schneider
and associated statics
...are generally wanted, but these would allow generic statics, and those generally do not mix with dylibs (which many want to ditch eventually...).
CAD97 Regular 27d
...
Would still be worth contemplating putting assert_matches! in std , though, the same way we have assert_eq! even with == syntax...
Imo assert!(let A = expr && ...); seems strictly more flexible.
assert!(expr == expected) is strictly more flexible than assert_eq!(expr, expected) as well, yet we still have assert_eq! because it can give more useful errors than assert!(==). I think the same applies to assert_matches! and assert!(let =).
bascule 27d
There are plans :slight_smile: (Me and Oliver should probably write an RFC at some point...)
Neat! Looking forward to seeing it.
pcwalton 27d
Quickly, about log: I don't believe that the style of logging we see now, as exemplified by log, will be the dominant form of how we emit instrumentation from applications and libraries in the decently-near future. I think it'll probably be far closer to OpenTelemetry or tracing.
For servers, maybe. But for those of us who are, say, writing low-level graphics code, those crates are total overkill. I don't want to learn a fancy logging system which has no benefit to the code that I'm actually writing right now; I just want to use log so that I can get printfs that actually work on Android. If log were to go away, then I'd probably migrate to just open-coding calls to __android_log_print instead of using a heavyweight logging framework, which would obviously be of no benefit.
kornel 26d
I think another aspect that's missing is whether these crates are the dominant and polished solutions to their respective problems, so that when std picks one "winner", most users will be happy with it.
For example, cfg-if, atty, and num_cpus seem to have one uncontroversial solution, and if they were in std, I don't think anyone would mind.
But OTOH logging is a big enough problem that it can be solved in multiple ways, and there are multiple crates competing. log works for Patrick, but I write servers, and it causes headaches for me, so that crate is not one-size-fits-all.
---
https://docs.rs/crossbeam/0.7.3/crossbeam/ Tools for concurrent programming.
---
https://taylor.fausak.me/2019/11/16/haskell-survey-results/#s2q5
" Which language extensions would you like to be enabled by default?
Multiple select.
48% 578 OverloadedStrings
40% 489 LambdaCase
34% 414 DeriveGeneric
29% 356 DeriveFunctor
29% 355 GADTs
28% 338 BangPatterns
26% 312 FlexibleInstances
25% 302 FlexibleContexts
25% 302 ScopedTypeVariables
25% 298 RankNTypes
24% 293 DeriveFoldable
23% 273 GeneralizedNewtypeDeriving
22% 269 TypeApplications
21% 251 TypeFamilies
21% 250 DeriveTraversable
20% 245 DataKinds
19% 235 TupleSections
19% 227 MultiParamTypeClasses
18% 213 DerivingVia
18% 212 TypeOperators
17% 207 KindSignatures
15% 178 DerivingStrategies
15% 176 DeriveDataTypeable
14% 172 MultiWayIf
14% 168 ViewPatterns
14% 164 StandaloneDeriving
13% 163 ConstraintKinds
13% 161 DeriveAnyClass
13% 160 RecordWildCards
13% 157 EmptyCase
12% 144 ApplicativeDo
12% 141 FunctionalDependencies
11% 139 ExplicitForAll
11% 135 InstanceSigs
11% 128 GADTSyntax
10% 125 PatternSynonyms
10% 122 NamedFieldPuns
10% 120 NumericUnderscores
212% 2563 Other
This is everyone’s favorite question. As you can see, there’s a long tail of extensions that people would like to be enabled by default. I only included things above that got at least 10% of the vote. OverloadedStrings is always the most popular, but it actually got more popular this year. LambdaCase continues to hang out in second place, followed by some extensions related to deriving and GADTs. My read of this is that people want some quality of life improvements without having to manually enable them, either in their Cabal file or in every source file. "
---
according to
https://2019.stateofjs.com/testing/
jest is a beloved js testing framework:
---
according to
https://2019.stateofjs.com/other-tools/
common js libraries (about 25% usage) are:
vscode is the most common text editor for js programming
webpack is the most common build tool, followed by gulp
python is the most common other language used by js devs (25%), followed by C# (15%)
---
" C++ has a monolithic standard library with an amazing set of cool stuff in it (because, as I noted in the last section, of when it was written). However, the library embeds some important assumptions. In particular, it is written for a "normal" C++ execution environment, which for our purposes means two things:
There is a heap, and it's okay to allocate/free whenever.
Exceptions are turned on.
In most high-reliability, hard-real-time embedded environments, neither of these statements is true. We eschew heaps because of the potential for exhaustion and fragmentation; we eschew exceptions because the performance of unwinding code is unpredictable and vendor-dependent.
(Footnote: There are also C++ programmers who avoid exceptions for religious reasons. I'm not among them; I have no objections to their existence, but I wish unwinding happened in predictable time.)
Now, there are parts of the C++ standard library that you can use safely in a no-heap, no-exceptions environment. Header-only libraries like type_traits are probably fine. Simple primitive types like atomic are ... probably fine?
I keep saying "probably" because the no-heap, no-exception subset of the C++ standard is not clearly defined. (The C++ standards folk have, in fact, resisted doing this, arguing that it would fragment the language; this ship has most definitely sailed.) As a result, it's really easy to accidentally introduce a heap dependency, or to accidentally use an API that can't indicate failure when exceptions are disabled (like std::vector::push_back).
The Rust standard library has a critical difference: it's divided into two parts, std and core. std is like the C++ equivalent. core, on the other hand, is how std itself is implemented, and doesn't assume the existence of things like "the heap," threads, and the like. While code depends on std by default, you can set an attribute, no_std, to request only core.
This is a tiny design decision with huge implications:
By setting the #[no_std] attribute on a crate, you're opting out of the default dependency on std. Any attempt to use a feature from std is now a compile-time error, but you can still use core.
You can trust other crates to do the same, so you can use third-party libraries safely if they, too, are no_std. Many crates are either no_std by default, or can have it enabled at build time.
core is small enough that porting it to a new platform is easy -- significantly easier, in fact, than porting newlib, the standard-bearer for portable embedded C libraries."
---
https://doc.rust-lang.org/core/index.html
---
https://github.com/real-logic/agrona
---
https://github.com/AndreaOrru/zen/blob/master/kernel/syscall.zig
---
" In general, unfair locking can get so bad latency-wise that it ends up being entirely unacceptable for larger systems. But for smaller systems the unfairness might not be as noticeable, but the performance advantage is noticeable, so then the system vendor will pick that unfair but faster lock queueing algorithm.
(Pretty much every time we picked an unfair - but fast - locking model in the kernel, we ended up regretting it eventually, and had to add fairness). "
"
" Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).
...
But at least that was a conceptually very simple model for doing locking: you create what is basically a counting semaphore by initializing a pipe with N characters (where N is your parallelism for the semaphore), and then anybody who wants to get the lock does a one-byte read() call and anybody who wants to release the lock writes a single byte back to the pipe. "
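A sketch of that pipe-as-semaphore idea, modeled here with a std channel standing in for the pipe (each buffered token plays the role of one byte; single-threaded for illustration, since sharing across threads would need the real pipe or a Mutex around the receiver):

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Counting semaphore: N tokens pre-loaded; acquire = recv, release = send.
struct Sem {
    tx: Sender<()>,
    rx: Receiver<()>,
}

impl Sem {
    fn new(n: usize) -> Sem {
        let (tx, rx) = channel();
        // Pre-fill with n tokens, like writing n bytes into the pipe.
        for _ in 0..n {
            tx.send(()).unwrap();
        }
        Sem { tx, rx }
    }
    fn acquire(&self) {
        // Like a one-byte read(): blocks when no tokens are left.
        self.rx.recv().unwrap();
    }
    fn release(&self) {
        // Like writing a byte back into the pipe.
        self.tx.send(()).unwrap();
    }
}

fn main() {
    let sem = Sem::new(2);
    sem.acquire();
    sem.acquire(); // both tokens taken; a third acquire would block
    sem.release();
    sem.acquire(); // succeeds because release() returned a token
}
```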
---
strenholme 1 day ago [-]
I’m already seeing a lot of discussion both here and over at LWN about which hash algorithm to use.
The Git team made the right choice: SHA2-256 is the best choice here; it has been around for 19 years and is still secure, in the sense that there are no known attacks against it.
Both BLAKE[2/3] and SHA-3 (Keccak) have been around for 12 years and are both secure; just as BLAKE2 and BLAKE3 are faster reduced-round variants of BLAKE, Keccak/SHA-3 has the official faster reduced-round KangarooTwelve and MarsupilamiFourteen variants.
BLAKE is faster when using software to perform the hash; Keccak is faster when using hardware to perform the hash. I prefer the Keccak approach because it gives us more room for improved performance once CPU makers create specialized instructions to run it, while being fast enough in software. And, yes, SHA-3 has the advantage of being the official successor to SHA-2.
reply
---
https://awesomekling.github.io/pledge-and-unveil-in-SerenityOS/
---
hyc_symas 1 day ago [-]
The standard string library is still pretty bad. This would have been a much better addition for safe strcpy.
Safe strcpy
char *stecpy(char *d, const char *s, const char *e)
{
	/* copy s into [d, e), stopping at the end of the buffer */
	while (d < e && *s)
		*d++ = *s++;
	if (d < e)
		*d = '\0';	/* NUL-terminate if there is room */
	return d;	/* points past the copied text, for chaining */
}

int main(void)
{
	char buf[64];
	char *ptr, *end = buf + sizeof(buf);
	ptr = stecpy(buf, "hello", end);
	ptr = stecpy(ptr, " world", end);
	return 0;
}

Existing solutions are still error-prone, requiring continual recalculation of the remaining buffer length after each use in a long sequence, when the only thing that matters is where the buffer ends, which is effectively a constant across multiple calls.
What are the chances of getting something like this added to the standard library?
reply
pascal_cuoq 1 day ago [-]
For what it's worth, I personally like this approach, because there are some cases in which it requires less arithmetic in order to be used correctly. And it lends itself better to some forms of static analysis, for similar reasons, in the following sense:
There is the problem of detecting that the function overflows despite being a “safe” function. And there is the problem of precisely predicting what happens after the call, because there might be an undefined behavior in that part of the execution. When writing to, say, a member of a struct, you pass the address of the next member and the analyzer can safely assume that that member and the following ones are not modified. With a function that receives a length, the analyzer has to detect that if the pointer passed points 5 bytes before the end of the destination, the accompanying size is 5; if the pointer points 4 bytes before the end, the accompanying size is 4, etc.
This is a much more difficult problem, and as soon as the analyzer fails to capture this information, it appears that the safe function a) might not be called safely and b) might overwrite the following members of the struct.
a) is a false positive, and b) generally implies tons of false positives in the remainder of the analysis.
(In this discussion I assume that you want to allow a call to a memory function to access several members of a struct. You can also choose to forbid this, but then you run into a different problem, which is that C programs do this on purpose more often than you'd think.)
reply
msebor 1 day ago [-]
There are many improved versions of string APIs out there, too many in fact to choose from, and most suffer from one flaw or another, depending on one's point of view. Most of my recent proposals to incorporate some that do solve some of the most glaring problems, that have been widely available for a decade or more, and that are even parts of other standards (POSIX) have been rejected by the committee. I think only memccpy and strdup and strndup were added for C2X. (See http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2349.htm for an overview.)
reply
AceJohnny2 1 day ago [-]
> Most of my recent proposals [...] have been rejected by the committee.
Does anyone have insight on why?
reply
saagarjha 1 day ago [-]
memccpy is a very welcome addition on the string-copying front; what else were you thinking of proposing?
reply
saagarjha 1 day ago [-]
I recently looked at a number of string copying functions, as well as came up with an API a bit similar to yours: https://saagarjha.com/blog/2020/04/12/designing-a-better-str... (mine indicates overflow more clearly). memccpy, which is coming in C2X, makes designing these kinds of things finally possible.
reply
---
https://medium.com/@peternorvig/ive-consed-every-pair-54ef5d9d93b6
I’ve used deftype, ftype, machine-type, define lisp-type, typecase, check-type, read-line class-name, class-of, defclass, load-time base-string, write-string, string-trim, run-time make-list, make-hash, make-node, string-make make-array, display, two-way, for Pete’s sake
I’ve used arrayp, boundp, minusp, iterator constantp, equalp, typep, numerator bit-nor, vector, ffloor, butlast, special or, xor, err-or, broadcast, truncate, conjugate, concatenate, package-error, allocate, update, random-state, what a terror.
---
nicoburns 5 days ago [–]
Qt is excellent, but C++ is quite a tough pill to swallow for many. Especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there's a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).
reply
_630w 5 days ago [–]
There are a few options.
https://github.com/revery-ui/revery
https://github.com/briskml/brisk
reply
soraminazuki 5 days ago [–]
Why not use Qt bindings for $YOUR_LANGUAGE_OF_CHOICE? https://wiki.qt.io/Language_Bindings
reply
mjevans 4 days ago [–]
It's rather clunky and often requires writing C++-style code in whatever language of choice you're using, the worst of both worlds.
I wonder what an API with only C (or some other low-level) bindings, designed to be easy to use externally, might look like.
reply
kdot 3 days ago [–]
Flutter for desktop solves this.
reply
catblast 4 days ago [–]
If you're using KDE, Qt is "native native".
You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms on Windows or the NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.
Qt pretty much draws using low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead) - people rag on vtable dispatch speed but jeez its still orders of magnitude faster than something like ObjC?