proj-oot-ootNotes29

the following article talks about various vulnerabilities in common dev practices that we should fix/watch/discourage:

https://hackernoon.com/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

"I’d see it in your source on GitHub?!

Your innocence warms my heart.

But I’m afraid it’s perfectly possible to ship one version of your code to GitHub and a different version to npm.

In my package.json I’ve defined the files property to point to a lib directory that contains the minified, uglified nasty code — this is what npm publish will send to npm. But lib is in my .gitignore so it never makes its way to GitHub. This is a pretty common practice so it doesn’t even look suspect if you read through these files on GitHub.

This is not an npm problem; even if I’m not delivering different code to npm and GitHub, who’s to say that what you see in /lib/package.min.js is the real result of minifying /src/package.js?

So no, you won’t find my nasty code anywhere on GitHub.

I read the minified source of all code in node_modules!

OK now you’re just making up objections. But maybe you’re thinking you could write something clever that automatically checks code for anything suspicious.

You’re still not going to find much that makes sense in my source, I don’t have the word fetch or XMLHttpRequest anywhere, or the domain that I’m sending to. My fetch code looks like this:

“gfudi” is just “fetch” with each letter shifted up by one. Hard core cryptography right there. self is an alias for window.

self['\u0066\u0065\u0074\u0063\u0068'](...) is another fancy way of saying fetch(...).
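
((note: to make the trick concrete, here is a minimal Python sketch of the same two ideas -- a one-letter Caesar shift so the sensitive name never appears literally, plus a dynamic attribute lookup. FakeWindow is made up for this sketch; it just stands in for the browser's window/self object:))

    def unshift(s):
        # "gfudi" -> "fetch": shift each letter down by one
        return "".join(chr(ord(c) - 1) for c in s)

    class FakeWindow:
        # stand-in for the browser's `window`/`self` object (hypothetical)
        def fetch(self, url):
            return "pretend network request to " + url

    w = FakeWindow()
    hidden = unshift("gfudi")                # -> "fetch"
    # the word "fetch" never appears literally in the calling code below
    print(getattr(w, hidden)("example.org"))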

The point: it is very difficult to spot shenanigans in obfuscated code, you’ve got no chance.

(With all that said, I don’t actually use anything as mundane as fetch, I prefer new EventSource(urlWithYourPreciousData) where possible. That way even if you’re being paranoid and monitoring outbound requests by using a serviceWorker to listen to fetch events, I will slink right by. I simply don’t send anything for browsers that support serviceWorker but not EventSource.) " -- [1]

btown 1 day ago [-]

One of the biggest problems here is that there is no “chain of custody” from Github source to uploaded NPM module; otherwise one of the developers using the malicious package could have audited the source code before including it in their own code. ‘npm publish’ would ideally insist on reproducible builds, enforce this by minifying or compiling packages itself, and finally encourage the community to always audit the code associated with a module. Of course, people are lazy, NPM has no incentive to incur that server and engineering overhead, and someone could sneak in code anyways with a minor version update... There’s no clear solution here, and I think the only thing keeping up this house of cards is that there are much easier ways for black hats to make money.

reply

djsumdog 1 day ago [-]

Even if you had that, no one is going to inspect all that code. npm is a cluster-farkle of insane amounts of packages.

The whole point of the article is you should implement CSP.

reply

((note: 'CSP' is a web thing called 'content security policy'. The post author also recommends not running third-party code on pages that collect passwords or credit-card numbers))

JonathonW 21 hours ago [-]

I have a fairly simple Node project at work; it pulls in nine runtime dependencies, plus 13 development-time dependencies (most of those are babel or eslint-related).

Assuming none of those are pulling shenanigans like mentioned in the article (distributing different code than in their source repositories, or deliberately obfuscating malicious code), it's not completely unreasonable for me to go through and audit my direct dependencies. But, since the Javascript standard lib is crap, all of my direct dependencies have their own large pile of dependencies, which themselves depend on a bunch of stuff, and so on.

By the time it's all said and done, my "simple" Node project pulls in several hundred dependencies (I didn't go through and count, but my 'yarn.lock' on that project has ~4200 lines). I can't audit all of that code.

(This is particularly bad in Node and Javascript, but applies to other languages too. I don't think anyone's ever fully audited all of our Nuget dependencies, or Python dependencies... fortunately, those both tend to be more self-contained, so at least we know what we're getting there.)

reply

jmadsen 1 day ago [-]

I'm more of a back-end dev who doesn't know all the ins and outs of the actual software used - can someone explain to me why this is an npm problem, and not an excessive dependencies problem?

I thought npm was simply a package manager - I don't see anything in the article that is specific to npm, except he happens to say that word.

reply

ufmace 4 hours ago [-]

It's kind of an excessive dependencies problem, except exacerbated by two things. One is Javascript's poor stdlib. This means that not only are you tempted to include lots of little packages to do basic things, but so are all of the big packages that you include to do big things for you, and all of the packages they include, etc. Often there are a bunch of different packages for doing the same basic things, and nobody agrees on which one, so you may end up with 5 different packages that do the same thing required by various packages you use.

Two is that much of it is expected to be served to the browser, so it's minified. Who audits that the minified code is actually the same as the published Github code?

At least in Ruby and Python, the code from Rubygems/Pip should exactly match that version on Github. Not that anyone necessarily audits that either, but at least it's easier.

reply

---

elihu 1 hour ago [-]

Yes, and more than that we need a good, better thought-out modern successor to POSIX-type interfaces. For instance: I think a process ought to be able to have more than one current working directory and possibly more than one user-ID at a time. It should have the option to insert data in the middle of a file without having to manually shift the rest down. Shell scripts should be able to interact with the filesystem via transactions that can be rolled back if anything fails. Programs should be able to have typed input and output, checked by the shell and/or OS, which could also enable command-line tab-completion to search installed programs for any that match a desired type.

I have a bunch of other random gripes with POSIX-style OS interfaces and find it a bit frustrating that these interfaces haven't changed much in decades and seem to have attained a lot of inertia of the "we do it this way because we've always done it this way" kind.

reply

---

Aside from the almost completed features in the pipeline, I only see procedural macros (macros 2.0) and whatever happens to make async code easier to write[1] really impacting day-to-day code for most people.

[1] I'd really like to see F# Computation Expressions instead of async/await. I know the language experts have said Haskell-like do notation doesn't work in Rust but I'm not sure if the F# tweaks would make it work or not.

reply

bjz_ 5 hours ago [-]

> [1] I'd really like to see F# Computation Expressions instead of async/await. I know the language experts have said Haskell-like do notation doesn't work in Rust but I'm not sure if the F# tweaks would make it work or not.

I'd prefer a systems-y, zero-cost take on algebraic effects, similar to what OCaml is going to get. Could be much more extensible, and open things up to annotating whether functions panic or not, access global state, etc. Alas it's still a tricky research problem, even after all these years. There were some nice discussions from ICFP here - the comments strayed into talking about how effects might be implemented without a GC: https://www.youtube.com/watch?v=DNp3ifNpgPM

reply

lobster_johnson 39 minutes ago [-]

Is there a paper or writeup anywhere on what the algebraic effects system coming to OCaml is going to be like? (Does it have a name that can be googled?)

reply

grayrest 22 minutes ago [-]

It's part of the Multicore OCaml [1] effort.

[1] https://github.com/ocamllabs/ocaml-multicore/wiki

reply

jeremyjh 5 hours ago [-]

You can already get “Monad comprehensions” through the mdo and mdo-futures crates.

reply

--- " ...I would like to see a garbage-collected Rust. Take away the borrow checker, and you still have a modern language with UTF-8 support out-of-the-box, algebraic data types, pattern matching, a focus on performance, and great tooling (cargo + rustup = <3).

OCaml almost fits the bill (Rust is inspired by OCaml after all), but the tooling around it is lacking to put it mildly. "

dom96 8 hours ago [-]

This sounds a lot like Nim to me. Give it a go if you haven't already. It's a systems programming language that primarily uses a garbage collector.

reply

currymj 8 hours ago [-]

rust at one point had garbage-collected references. And there's still Rc/Arc types for reference counting.

I get what you're saying though. Another poster mentioned Swift and indeed Graydon Hoare, Rust's creator, is now working on Swift at Apple. And I believe some kind of notion of borrow checking/lifetimes is supposed to be coming to Swift in the future?

reply

pjmlp 8 hours ago [-]

Yes, the initial support is already there in Swift 4, it is called enforced exclusive access to memory in Apple documentation.

reply

mmirate 5 hours ago [-]

That's easy, just change your program so that every value of type T is now a value of type Arc<Mutex<T>>.

reply

weberc2 1 day ago

parent flag favorite on: Rust in 2018: easier to use

I have a similar desire. OCaml and others push heap-allocated reference types by default (limiting your options for controlling allocations), and many have weak tooling and library support. F# looks neat, but it seems to have a lot of baggage relating to C# interop. F# also doesn't compile static binaries yet. I think I decided the best shot is to build a language that compiles to Go, since it has the right semantics, great libraries and tooling, and a world-class runtime (GC, painless async IO, lightweight thread scheduler for real parallelism). Obviously this is still a huge effort and probably a pipe dream, but it's the path of least resistance to get a "Rust with GC".

...

yen223 11 hours ago [-]

I also wished the ReasonML folks started from scratch, instead of inheriting OCaml's baggage (no forward references, a plethora of file types to deal with, no UTF8 strings without bringing in an external lib, and so on).

yen223 1 hour ago [-]

> Not sure what you mean by "no forward references"?

I probably got the name wrong, but it's the ability to use a function before it is defined.

In OCaml/ReasonML, you'd have to use the rec keyword and structure your codebase in a particular way to define mutually-recursive functions. It is a small but noticeable papercut, especially since recursion is so common in a functional language.

reply

bjz_ 5 hours ago [-]

> Not sure what you mean by "no forward references"?

I'm guessing they are talking about having implicit mutual recursion between items in a module, like Haskell has.

reply

Manishearth 11 hours ago [-]

This sounds like Swift :)

(ARC, not tracing GC, but still.)

reply

littlestymaar 7 hours ago [-]

If Arc counts, then it just sounds like Rust actually ;).

reply

K0nserv 7 hours ago [-]

I dunno if I agree, the borrow checker in Rust requires a lot more active thinking and intervention to get memory right. Objective-C and Swift ARC is really straightforward and there are few gotchas. Kinda like GC.

reply

littlestymaar 7 hours ago [-]

If you use Arc everywhere you just end up with the same behavior as Swift (except for closures I guess).

reply

steveklabnik 31 minutes ago [-]

Small note: Swift's ARC and Rust's Arc are different: Rust's Arc is atomic reference counting, while Swift's is automatic reference counting.

reply

kazagistar 4 hours ago [-]

I would like to see a different GC rust... an ability to integrate into an external GC system that it is embedded in, like Javascript or Java or whatever else. In other words, `Gc<_>` would mean "owned by the other runtime". I think there was a proposal like this, though I am not sure what happened to it.

reply

---

https://jsandler18.github.io/

"

Now, we are going to write some library code. We create the files src/common/stdlib.c and src/common/stdio.c and corresponding header files.

In stdlib.c, we define the functions memcpy and bzero, as these will come in handy later, and we define itoa (integer to ascii) to make debugging easier.

In stdio.c, we define getc, putc, gets and puts as general purpose IO functions. We do this even though uart.c had uart_putc and uart_puts because later we are going to want to swap out uart_putc for a function that renders text to an actual screen, and it will be easier to replace one call to uart_putc here than many possible places. "

e also writes a malloc and a free (and a mem_init). 4k pages. e also reserved 1 MB for the kernel heap.

https://jsandler18.github.io/extra/atags.html

" The Atags is a list of information about certain aspects of the hardware. This list is created by the bootloader before our kernel is loaded. The bootloader places it at address 0x100, and also passes that address to the kernel through register r2. If you look at the function signature of kernel_main, void kernel_main(uint32_t r0, uint32_t r1, uint32_t atags), you can see that the atags pointer is the third argument.

The Atags can tell us how large the memory is, where the bootloader put a ramdisk, what is the serial number of the board, and the command line passed to the kernel via cmdline.txt

An Atag consists of a size (in 4 byte words), a tag identifier, and tag specific information. The list of Atags always starts with the CORE tag, with identifier 0x54410001, and ends with a NONE tag, with identifier 0. The tags are concatenated together, so the next tag in the list can be found by adding the number of bytes specified by the size to the current Atag’s pointer. "
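
((note: a minimal sketch of that traversal in Python, assuming 'blob' is a raw byte dump of memory starting at the Atag list; the tag ids and the size-in-words convention are from the quote above. The tutorial's C code presumably does the same walk with a struct pointer and pointer arithmetic:))

    import struct

    ATAG_NONE = 0x00000000
    ATAG_CORE = 0x54410001

    def walk_atags(blob):
        """Yield (tag_id, body_bytes) for each Atag in a raw dump of the tag list."""
        offset = 0
        while True:
            # each tag header is: size in 4-byte words, then the tag identifier
            size_words, tag_id = struct.unpack_from("<II", blob, offset)
            if tag_id == ATAG_NONE:
                break
            yield tag_id, blob[offset + 8 : offset + size_words * 4]
            # the next tag starts 'size_words' 4-byte words further on
            offset += size_words * 4

    # a fake two-entry list: a CORE tag (size 5 words) followed by NONE
    fake = struct.pack("<II", 5, ATAG_CORE) + b"\x00" * 12 + struct.pack("<II", 2, ATAG_NONE)
    print(list(walk_atags(fake)))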

http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#appendix_tag_reference

Table 3. List of usable tags

Tag name        Value       Size                               Description
ATAG_NONE       0x00000000  2                                  Empty tag used to end list
ATAG_CORE       0x54410001  5 (2 if empty)                     First tag used to start list
ATAG_MEM        0x54410002  4                                  Describes a physical area of memory
ATAG_VIDEOTEXT  0x54410003  5                                  Describes a VGA text display
ATAG_RAMDISK    0x54410004  5                                  Describes how the ramdisk will be used in kernel
ATAG_INITRD2    0x54420005  4                                  Describes where the compressed ramdisk image is placed in memory
ATAG_SERIAL     0x54410006  4                                  64 bit board serial number
ATAG_REVISION   0x54410007  3                                  32 bit board revision number
ATAG_VIDEOLFB   0x54410008  8                                  Initial values for vesafb-type framebuffers
ATAG_CMDLINE    0x54410009  2 + ((length_of_cmdline + 3) / 4)  Command line to pass to kernel

then they implement gpu_putc, which prints a character to the screen hardware.

then they implement interrupts

https://jsandler18.github.io/extra/interrupts.html

" This set of addresses, also known as the Vector Table, starts at address 0. Below is a table that describes each exception Address Exception Name Exception Source Action to take 0x00 Reset Hardware Reset Restart the Kernel 0x04 Undefined instruction Attempted to execute a meaningless instruction Kill the offending program 0x08 Software Interrupt (SWI) Software wants to execute a privileged operation Perform the opertation and return to the caller 0x0C Prefetch Abort Bad memory access of an instruction Kill the offending program 0x10 Data Abort Bad memory access of data Kill the offending program 0x14 Reserved Reserved Reserved 0x18 Interrupt Request (IRQ) Hardware wants to make the CPU aware of something Find out which hardware triggered the interrupt and take appropriate action 0x1C Fast Interrupt Request (FIQ) One select hardware can do the above faster than all others Find out which hardware triggered the interrupt and take appropriate action "

" Pending registers indicate whether a given interrupt has been triggered. These are used in order to determine which hardware device triggered the IRQ exception. Enable registers enable certain interrupts to be triggered by setting the appropriate bit "

" The Raspberry Pi has 72 possible IRQs. IRQs 0-63 are shared between the GPU and CPU, and 64-71 are specific to the CPU. The two most important IRQs for our purposes will be the system timer (IRQ number 1) and the USB controller (IRQ number 9). "

they provide a function register_irq_handler. they setup the system timer peripheral.

https://jsandler18.github.io/tutorial/process.html

each process gets a Process Control Block (PCB)

" typedef struct pcb { proc_saved_state_t * saved_state; Pointer to where on the stack this process's state is saved. Becomes invalid once the process is running void * stack_page; The stack for this proces. The stack starts at the end of this page uint32_t pid; The process ID number DEFINE_LINK(pcb); char proc_name[20]; The process's name } process_control_block_t; "

note: proc_saved_state_t is just the saved contents of each of the registers. DEFINE_LINK is the link for a linked list.

they create a "list of processes that want to run. This is called the Run Queue". They create a scheduler which runs every 10ms (using the system timer interrupt). They use round robin scheduling. They export a 'void create_kernel_thread(kthread_function_f thread_func, char * name, int name_len) {'.

The next part, not yet written, is titled "Locks".

actually this whole thing was a good read/skim. i added 'part 9: locks and on' to ootToReads.

---

in Rust, there is a trait (interface) called 'Send' that allows one to send 'owned' data structures from one 'execution context' (task/thread) to another (so i'm guessing this is for using move semantics to transfer something from one thread to another?)

---

wiremine 6 hours ago [-]

I'm a fulltime IoT software consultant, and I wish we'd see more of these initiatives. A few thoughts:

1. The problem isn't the transport layer, it's the application layer. The transport layer is mostly solved via MQTT, COAP, Thread, etc. Sure, we can improve there, but the real problem is the application layer. So I applaud Mozilla's attempt to bring something into this space.

---

4. JSON is a non-starter long-term: it sucks for small devices. They need a binary format that is easy to parse.

ASN.1?

https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One

> ASN.1 is similar in purpose and use to protocol buffers and Apache Thrift, which are also interface description languages for cross-platform data serialization. Like those languages, it has a schema (in ASN.1, called a "module"), and a set of encodings, typically type-length-value encodings. However, ASN.1, defined in 1984, predates them by many years. It also includes a wider variety of basic data types, some of which are obsolete, and has more options for extensibility. A single ASN.1 message can include data from multiple modules defined in multiple standards, even standards defined years apart.

reply

jnwatson 4 hours ago [-]

Asn.1 is great, and I see a lot of folks relearning the hard-earned lessons of it. The only problem is that there are no good open source implementations.

reply

ofek 2 hours ago [-]

https://github.com/wbond/asn1crypto

reply

kevin_thibedeau 2 hours ago [-]

All of the open source TLS libs have to support enough of ASN.1 to process certificates.

reply

mkj 5 hours ago [-]

cbor looks better for small devices?

reply
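
((note: "type-length-value" from the ASN.1 quote above is a simple idea and easy to sketch. This is a toy TLV encoder/decoder in Python -- not any real ASN.1 encoding (DER/BER have their own tag and length rules), just the general shape:))

    import struct

    def tlv_encode(tag, value):
        # 1-byte tag, 2-byte big-endian length, then the raw value bytes
        return struct.pack(">BH", tag, len(value)) + value

    def tlv_decode(blob):
        """Yield (tag, value) pairs from a concatenation of TLV records."""
        offset = 0
        while offset < len(blob):
            tag, length = struct.unpack_from(">BH", blob, offset)
            offset += 3
            yield tag, blob[offset:offset + length]
            offset += length

    msg = tlv_encode(0x01, b"25.3") + tlv_encode(0x02, b"sensor-7")
    print(list(tlv_decode(msg)))   # [(1, b'25.3'), (2, b'sensor-7')]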

---

wiremine 6 hours ago [-]

I'm a fulltime IoT software consultant, and I wish we'd see more of these initiatives. A few thoughts:

1. The problem isn't the transport layer, it's the application layer. The transport layer is mostly solved via MQTT, COAP, Thread, etc. Sure, we can improve there, but the real problem is the application layer. So I applaud Mozilla's attempt to bring something into this space.

2. Bootstrapping this will require a substantial number of hardware vendors to sign on, both at the edge and at the hub layers. IMHO this is why Google Weave [1] never took off in its original incarnation. Bootstrapping this like they did with web stuff isn't enough, because this isn't the web.

3. Devices are only part of the problem: We need a software services layer, too. Think time services, IFTTT-like orchestrators, media services, etc.

4. JSON is a non-starter long-term: it sucks for small devices. They need a binary format that is easy to parse.

5. Request-Response isn't the right pattern for most use cases.

6. The Property/Action/Event concept is a solid start.

7. For the love of everything holy, add versioning!

[1] http://internetofthingsagenda.techtarget.com/feature/Google-...

Edit: Grammar.

---

geofft 8 hours ago [-]

Speaking as a developer who primarily writes Python and secondarily C, Rust, C++, bash, and others as necessary, who has in fact worked on a large-scale C++ UI project in Qt on the Linux desktop running alongside PyGTK UIs, and who doesn't actively write JavaScript, I have to say that JavaScript is a really good choice of language, especially for UI, because it has a strong bias towards event-based programming and callbacks instead of blocking code and threads. If you're going to pick the right tool for the job of UI, I'd put JS a bit ahead of Rust (because of immaturity; that is likely to change real soon), which I'd put a lot ahead of C++ (because you're going to screw up threading) a little bit ahead of C and Python (because you're definitely going to screw up threading). If you're interested in rapid prototyping, I'd move Python up several notches - but not past JS, which is what I reach for these days for throwaway prototypes if I think I know enough JS to pull it off.

(If someone has a high-quality way to use asyncio or Twisted with the Qt event loop, I'd probably bump Python a bit higher.)

reply

---

since i think the perfect writing system wouldn't have uppercase/lowercase, maybe Oot should not use capitalization for anything?

---

3 "what's new in python 3" thingees (i already read and took notes on them):

[2] [3] [4]

---

how much data can you fit in a UDP packet?

The IPv4 max payload size for UDP is 65507 bytes [5] (65,535 - 8 byte UDP header − 20 byte IP header). However, in reality, you want to use a much lower limit to avoid packet fragmentation ([6] claims that firewalls may drop fragmented packets, and that IPv6 will drop them).

In https://stackoverflow.com/questions/14993000/the-most-reliable-and-efficient-udp-packet-size the top answer suggests assuming a 1500 MTU (btw i think a maxed size ethernet packet payload is around 1500 bytes?) and computing: 1500 MTU - 20 IP hdr - 8 UDP hdr = 1472 bytes. But that answer goes on to note that the minimum MTU size that an host can set is 576, and IP header max size can be 60 bytes, and then computes 576 MTU - 60 IP - 8 UDP = 508.

On https://stackoverflow.com/questions/1098897/what-is-the-largest-safe-udp-packet-size-on-the-internet the top answer suggests 512 "although even that does not leave quite enough space for a maximum size IP header". Other sources also use 512 [7]

Compare to my 1k message size in my 'fabric' idea.

Ed25519 signatures are 64 bytes (512 bits).

So an Ed25519-signed payload of the 'safe' size of 508 would leave 508 - 64 = 444 bytes for signed payload (or, 222 16-bit words).

222 words is greater than 128+64 = 192. So one could imagine a 128 word sub-payload and a 64-word sub-header (you still have 30 16-bit words to spare so this is very conservative).

So, when thinking of data structures that might be sent in one freestanding message (as opposed to 'bulk data' eg a picture that would frequently be sent over many messages), one might try to make sure it fits into 128 words (16-bit words) (or, 256 bytes).

Compare to cache lines, which are frequently 64 bytes (32 16-bit words; 512 bits) -- so 4 cache lines (2k bits) fit in one 128-word subpayload.
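
((note: the arithmetic above, collected in one place as a Python sketch; the conservative numbers are the ones from the stackoverflow answers cited, and the sub-header/sub-payload split is my own framing from the paragraphs above:))

    # worst-case IPv4 path: 576-byte MTU, 60-byte IP header, 8-byte UDP header
    MIN_MTU, MAX_IP_HDR, UDP_HDR = 576, 60, 8
    safe_udp_payload = MIN_MTU - MAX_IP_HDR - UDP_HDR          # 508 bytes

    # typical ethernet path: 1500-byte MTU, 20-byte IP header
    typical_udp_payload = 1500 - 20 - UDP_HDR                  # 1472 bytes

    ED25519_SIG = 64
    signed_payload = safe_udp_payload - ED25519_SIG            # 444 bytes
    words_16bit = signed_payload // 2                          # 222 16-bit words

    # proposed split: 128-word sub-payload + 64-word sub-header, ~30 words spare
    assert words_16bit >= 128 + 64
    print(safe_udp_payload, typical_udp_payload, words_16bit)  # 508 1472 222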

But could we go up to 1472? http://www.pcvr.nl/tcpip/udp_user.htm did an experiment in around 1993:

" Fifteen countries (including Antarctica) were reached and various transatlantic and transpacific links were used. Before doing this, however, the MTU of the dialup SLIP link between the author's subnet and the router netb (Figure 11.12) was increased to 1500, the same as an Ethernet.

Out of 18 runs, only 2 had a path MTU of less than 1500. One of the transatlantic links had an MTU of 572 (a value not even listed as a likely value in RFC 1191) and the router did return the newer format ICMP error. Another link, between two routers in Japan, wouldn't handle a 1500-byte frame, and the router did not return the newer format ICMP error. Setting the MTU down to 1006 did work.

The conclusion we can make from this experiment is that many, but not all, WANs today can handle packets larger than 512 bytes. Using the path MTU discovery feature will allow applications to take advantage of these larger MTUs. "

it's also worth thinking of stuff like

https://github.com/libcsp/libcsp

" Cubesat Space Protocol - A small network-layer delivery protocol designed for Cubesats"

...

The protocol is based on a 32-bit header containing both transport and network-layer information. Its implementation is designed for, but not limited to, embedded systems such as the 8-bit AVR microprocessor and the 32-bit ARM and AVR from Atmel.

...

The idea is to give sub-system developers of cubesats the same features of a TCP/IP stack, but without adding the huge overhead of the IP header. The small footprint and simple implementation allows a small 8-bit system with less than 4 kB of RAM to be fully connected on the network.

...

Very Small Footprint 48 kB code and less than 1kB ram required on ARM

"

https://github.com/libcsp/libcsp/blob/master/doc/mtu.rst gives some example MTUs for this case: 256 bytes, 200 bytes, 100 bytes

What MTUs are found on the internet? https://blog.cloudflare.com/path-mtu-discovery-in-practice/ says "The minimal required MTU for all IPv6 hosts is 1,280 ( http://www.delaat.net/rp/2012-2013/p55/presentation.pdf says something about 1232?), which is fair. Unfortunately for IPv4 the value is 576 bytes. On the other hand RFC 4821 suggests that it's "probably safe enough" to assume minimal MTU of 1,024."

Note however that 1024 MTU gives you <1024 payload in UDP. So the largest power-of-two that fits in our payload is still 512 bytes here (256 16-bit words). Otoh you could say that's a 512 byte SUB-payload, with a 256 byte sub-header, plus lots of spare room still (enough for a signature certainly). So our data structure max is now 256 words, instead of 128.

In this experiment, the smallest path MTU observed was 1240: https://www.nlnetlabs.nl/downloads/publications/pmtu-black-holes-msc-thesis.pdf . "Remarkable is the fact that the IPV4 minimum MTU of 576 bytes is not used at all."

This guy recommends around 1400: https://stackoverflow.com/questions/2613734/maximum-packet-size-for-a-tcp-connection/3074427#3074427

So it seems to me that a payload size of 1k isn't crazy, even though that's above the RFC 4821 suggestion.

---

" The computers we had practically encouraged you to get into programming right out of the box. You booted into the Basic interpreter, and that was effectively your command line interface to the operating system. Depending on the computer, and the Basic interpreter you had, you might very easily be able to get right into doing some graphics, and maybe making it produce some sounds with just a few commands...

The computers we had had two screen modes. One was character-based, and one was based in bitmap graphics. Both modes were just different views on screen memory, and each represented what was in it differently. " -- [8]

" Wozniak designed the Apple II with some specific ideas in mind. He wanted to improve on the design of the Apple-1, add additional memory and input/output options (the slots), plus he wanted to make it possible to do the game Breakout in software. This gave the parameters for the lo-res graphics, the colors, and the single-bit sound, as well as the game paddle inputs. Had Woz wanted a computer to keep track of database files, or for word processing, likely he would have focused more on the text display (supporting upper and lowercase) and data storage (a more robust software interface for cassette storage). But without the color and sound, it is hard to say whether or not the Apple II would have had as much of an impact on the market as it did. " -- [9]

" Then there is BASIC. I hate to say it, but Apple BASIC sucks compared to Commodore BASIC. granted, Apple at least has graphics commands, I can give them credit for that. But the screen editor is very un-userfriendly. You hit the delete key (positioned where backspace is on the C64) and what do you get? Rather than backing up, it puts some funny character on the screen. What is up with that? Editing lines in BASIC is also very painful. Oh.. and I tried to write a program and found there doesn't seem to be any way to change text colors in BASIC and the GET statement doesn't work like the Commodore version. It actually halts the program until a key is pressed. That makes it impossible to write a program in pure BASIC that does any kind of active routine while waiting on a keystroke. " -- [10]

" The IIe's double hires mode is quite a bit better than the hires mode in that sixteen colors can be used rather than six. Many Sierra games used this mode for the IIe. Then there is the issue of the poor quality sound. It was interesting on what programmers did to work around this problem. The Electronic Duet program allowed two "voices" to be used, though it would halt any other operations, rendering it useless for action games. A similar method was used in the title screens of Prince of Persia and Dark Lord. The Mockingboard sound card was neat, yet it had little support and didn't come close to the quality of the SID chip. " -- [11]

"

    Microprocessor CPU:
        MOS Technology 6510/8500 (the 6510/8500 is a modified 6502 with an integrated 6-bit I/O port)
        Clock speed: 0.985 MHz (PAL) or 1.023 MHz (NTSC)
    Video: MOS Technology VIC-II 6567/8562 (NTSC), 6569/8565 (PAL)
        16 colors[89]
        Text mode: 40×25 characters; 256 user-defined chars (8×8 pixels, or 4×8 in multicolor mode); or extended background color; 64 user-defined chars with 4 background colors, 4-bit color RAM defines foreground color
        Bitmap modes: 320×200 (2 unique colors in each 8×8 pixel block),[90] 160×200 (3 unique colors + 1 common color in each 4×8 block)[91]
        8 hardware sprites of 24×21 pixels (12×21 in multicolor mode)
        Smooth scrolling, raster interrupts
    Sound: MOS Technology 6581/8580 SID
        3-channel[89] synthesizer with programmable ADSR envelope
        8 octaves
        4 waveforms per audio channel: triangle, sawtooth, variable pulse, noise
        Oscillator synchronization, ring modulation
        Programmable filter: high pass, low pass, band pass, notch filter
    Input/Output: Two 6526 Complex Interface Adapters
        16 bit parallel I/O
        8 bit serial I/O
        24-hours (AM/PM) Time of Day clock (TOD), with programmable alarm clock[92]
        16 bit interval timers
    RAM:
        64 KB, of which 38 KB (minus 1 byte) were available for BASIC programs
        512 bytes color RAM (memory allocated for screen color data storage)[93]
        Expandable to 320 KB with Commodore 1764 256 KB RAM Expansion Unit (REU); although only 64 KB directly accessible; REU mostly intended for GEOS. REUs of 128 KB and 512 KB, originally designed for the C128, were also available, but required the user to buy a stronger power supply from some third party supplier; with the 1764 this was included. Creative Micro Designs also produced a 2 MB REU for the C64 and C128, called the 1750 XL. The technology actually supported up to 16 MB, but 2 MB was the biggest one officially made. Expansions of up to 16 MB were also possible via the CMD SuperCPU.
    ROM:
        20 KB (9 KB Commodore BASIC 2.0; 7 KB KERNAL; 4 KB character generator, providing two 2 KB character sets)

Input/output (I/O) ports and power supply " -- [12]

" The Apple II had a clear edge when it came to business applications. The pioneering spreadsheet Visicalc gave the machine an early advantage that it never relinquished to any other 8-bit machine.

When it came to games, things get a bit blurrier. The Apple II had a major advantage when it came to role-playing games, partly because software developers assumed most Apple II owners had two disk drives, and they wrote their games to take advantage of that. Most C-64 software assumed a single-drive machine, even though dual-drive 64s were fairly common. Plus, the faster disk drives on the Apple II made role-playing games more enjoyable.

The C64’s advantage was with arcade-style games because of its 3-voice sound chip and sprite graphics. Fast movement was possible on the Apple II but it required a lot more complex programming, and the Apple II’s beeper couldn’t compare with the 64’s mini-synthesizer. " -- [13]

" Sound IIgs sound stomps on the SID or any 8 bit machine of the day. The Mockingboard upgrade gave Apple IIs very good sound. Too bad it wasn't supported by more titles. Stock Apple II sound sucks. The C64 beats the stock Apple II and more titles support the SID than Mockingboard and IIgs sound combined. " -- [14]

" The Apple II wins, at least out of the box. The Commodore may have better graphical and sound capabilities, but trying to tap into those capabilities in BASIC is easier said than done. Actually, AppleSoft? BASIC doesn't make sound programming all that easy either, but there are a lot of nice commands for creating both lo-res and hi-res graphics. " -- [15]

" The BASIC interpreter is derived from the same codebase, but the Apple version has graphics functionality built-in, which the C64 doesn't. But the C64 version has bitwise logic, which was removed from the Apple version. " -- [16]

"

The C64 had great bang for the buck! And it got used for lots of stuff, despite its business display limitations. No real 80 column support impacted things. That was a big deal at that point in time. A nicely configured C64 ended up being used for real estate contract stuff and it worked out really well, for the cost. A similar task performed on the Apple ][ literally was no contest! Data access was faster, display capabilities better, software library more feature and title rich, etc... The end product on each varied significantly, with the Apple able to do seriously good quality output, if somebody wanted to pay for that. " -- [17]

" In terms of scientific uses, test measurement, industrial automation, data logging, sensors, interfacing to control systems, I/O, higher end graphics output, plotting, CAD design, etc... an Apple 2 could be fitted with great devices, in the box, and supported as in the box things, due to how the system ROM was written. A C64 just didn't have any of that design vision incorporated into it, and as a result just didn't see those kinds of things being possible. Apple 2 computers were used as development computers regularly too, this due to effective storage capability, robust programming tools and the ability to interface to development type hardware as needed on cards. " -- [18]

" Gaming on an Apple is really interesting! It doesn't have fancy graphics chips, nor sound. Lacking a sound chip really impacted the machine, but not having a graphics chip just didn't to the same degree. No wait state video access and the basic ability to page flip was just enough to keep the machine relevant for most titles, resulting in often surprisingly acceptable games, despite very low expectations overall.

I got my Atari to game on, as did a ton of people, and I programmed it a lot too, as did a lot of people. I got the Apple to get shit done, as did a lot of people. A well equipped Apple produced writing and graphics for me well into the 90's, and by then it was cheap ass to get done and surprisingly effective. The next machine I got that worked out that way happened to be a PC, which mirrored many of the great Apple design decisions. And the PC didn't prioritize it's gaming display capabilities then either, but it was the machine to get shit done.

(which could be said of the 16 bit computers as well, though I personally did not entertain that path, moving to UNIX / SGI instead) " -- [19]

" I don't believe there was any special vision incorporated into the Apple II design. These following features came together synergistically to add up to more than the sum of their parts.

1- Completely open architecture built around cheap TTL components. No custom chips.
2- Fully, and most excellently, documented ROM listings.
3- 8 expansion slots. And built-in A/D converter.
4- A beyond ultra-reliable & elegant disk storage system. Fast, lightweight firmware, parallel operation.
5- Lightning fast text response and immediate reset when demanded.

...

Did you know that when you do disk access on an Apple II, the 6502 must stop what it was doing and take command of the disk? The 6502 controls the stepper motor, the spindle motor, and the data flow to the sequencer. The 6502 sets the tone of the entire operation.

Everything in the Apple II was as close to the bare metal of the TTL logic as possible and yet at the same time remain suitable for consumer use. Little or no firmware to get in the way or slow things down. " -- [20]

"

That the Apple II's design could complete against a later generation was testament to its design. Consider that the Apple II has much much more in common with the dedicated pong-style units and very early arcade motherboards, single-board hobbyist computers like the KIM-1, or the IMSAI and Altair S-100 bus systems; than it does with a C64 and A800.. Individual TTL logic, being the key point here. No custom chips.

Once you get into custom chips, the hardware starts closing up and complexity slows things down. Look what happened to the Amiga! This is a system that collapsed under its own weight seemingly. The machine had great specs on paper, and by all rights and means it should have been able to do arcade-perfect ports of many earlier games. But the damned thing was buried under bloated & buggy firmware trying to manage those custom chips.

We didn't see this problem on the C64 or A800 because the custom chips weren't complex enough. Not yet. They were still simple and "bare metal enough" to be an aid as opposed to a hindrance. The 6502 still had a good level of authority at the clock-cycle level. Now look at the Amiga camp - you had all sorts of shit going on on the bus. Too much police in the street not communicating with each other. The city traffic slows down while each cop radios to base for instructions. Too much "unrelated-to-what-you're-working-on" code has to be executed.

While the likes of this complexity had never been seen in a toy computer before it was still badly implemented, it was only good for advertising fodder and niche applications. These custom chips needed a lot of CPU overhead to keep them sync'd. Now imagine throwing a memory management chip in there (onboard later 68xxx processors), how the fuck can anything get done? The system had barely enough power to boot itself! Oftentimes my shitbox Apple II would outperform the Amiga in untold number of instances.

In contrast, look at the 1st MAC systems. The same philosophy of the II applied. Do everything with basic hardware. Allow efficient bus usage and let the application data flow unhindered. In that environment software can work magic.

Thank god the C64 & A800 didn't bloat up like these early 16-bit machines. Same thing with the VCS, it has a custom chip, the TIA, but it is a low-count simplistic beast not weighing anything down. " -- [21]

" Text density on the screen differed somewhat, but was overall quite similar (with the exception of the VIC-20): computer rows x columns char per screen PET 25 x 40 1000 Apple II 24 x 40 960 TRS-80 16 x 64 1024 Atari 24 x 40 960 VIC-20 23 x 22 506 C64 25 x 40 1000 " -- [22]

" The one final point I want to reiterate is that the Apple II expansion slots provided direct connections to the address/data lines along with many other timing signals and strobes - without firmware interaction. This is also what the IBM PC did and the toy computers did not. " -- [23]

"

I ended up moving right into manufacturing and engineering work. Atari, C64, and CoCo 3 were good for the soul, programming, gaming, hacking on the chips and sometimes doing quick little projects with the various ports.

But, it was the Apple 2 and PC that provided access to experiences that proved to be worth it, translating right into making good money, which I did in manufacturing and automation. If you had an Apple, and a few contacts, you were in. Same for the PC. ... Using all the business apps paid very well as I had the jump on a lot of people, right from school able to do spreadsheets, various word processing, publishing and even fairly professional quality collateral. The Apple 2 could do postscript output, and that was killer when it came to those things. Helped to pay for some college on my part.

CAD was something very powerful that I first did on the Apple as well, moving rapidly to the PC where it's greater memory space, speed and cheaper / faster storage options made mechanical CAD a reality. Mix in the programming skills, and I sold my first software package for CAD systems, making me enough money to really get a great PC, and that's remained true. For me, the computing hobby has always paid for itself and always will. ... Lots of C64 / Atari users just paid and played and that's fine, but that's also not the typical Apple scene and that's something people should understand today because that difference was very significant and often overlooked in lieu of the sexy games and such we all like so much. " -- [24]

"

I always thought of the Apple II as the last single-board hobbyist computer. S-100's, RCA Cosmac VIP, Kim-1, TRS-80 Model I, IMSAI, Altair, Heathkit, countless others. The Apple II belongs to the same heritage and engineering philosophy as those machines. Sooner or later, a machine would hit on the right combination of features to make it long-lived. And the Apple II happened to be it

Everything designed after it was geared toward the consumer market with a purpose. Features were now being carefully chosen in a price/performance matter with a goal in mind. Remember there was no goal or marketing benchmark set for the Apple II, the joe-blow consumer and technical hobbyist alike were still discovering what a computer was and what one could do with them when the 2 series was born. It is the last of the hobby systems, last of the green-screen terminal type devices. And it guided the transition from basement-dwelling to mass-marketing computers as a practical or fun tool. " -- [25]

" 6 color high res screens are enough to do basically anything. 4 color high res screens aren't, " -- [26]

" 6 color screens, with some high resolution color capability meant for some killer pixel art that is very difficult to reproduce on the machines with custom hardware. That hardware brought more colors, which is a good thing, but the trade-off was not having smaller color dots, and or significant freedom of movement on multi-color scenarios. These differences impacted how games were presented, and the fun part here is exploring that, of which the Apple has a lot to contribute. " -- [27]

" I always felt the Apple II could push text around like nobody's business. The C64 & A800 had to go through tons of gymnastics to get a single character of text on-screen. It was, like, just there. No custom chips to program registers, no lengthy firmware write routines. Just a simple text generator circuit. Don't get me started on the Amiga, it was even worse - ughh! There have been times when I wanted to make a personal journal entry of perhaps a few lines, a short paragraph at most. Sometimes I could complete this task on the Apple II entirely in the time it would take to get my Amiga booted and the software loaded and the data file (text.doc for example) loaded. Business can't afford that kind of lollygagging. " -- [28]

to summarize what i read when i looked up apple iie vs c64:

---

 Jeff_Brown 5 hours ago [-]

I used Java, C and C++ before coming to Python. Unlike theirs, Haskell's type system is complete -- even a function of functions can specify exactly what kinds of functions it uses for inputs and outputs. That makes higher order programming much safer -- and higher-order programming might be the best way to move fast.

Haskell is also astoundingly terse. In Java and C my data type declarations were too long to fit on a page, full of redundancies and boilerplate. In Haskell if you want to make, say, a data type that is either an X or an O (suppose you're writing tic-tac-toe), you could do it in four words: 'data XO = X | O'. (Notice that there's not even a natural way to do that in Java or C, because they don't have sum types; you'd have to make a type that has a flag to indicate whether it is an X or an O. That gets really complicated if they're supposed to have different data associated with it -- but in Haskell, if the X is supposed to carry a float and the O is supposed to carry a string, you just add two more words.)

Pattern matching also helps with terseness. I don't have time even to write the last paragraph so I'll skip this one.

Purity keeps you from tripping up on IO-related errors. It lets you be much more certain that things are working. It also forces you to keep the IO in a thin top-level layer of your program, which might sound like a pain but once it feels natural you'll find yourself moving faster than you could before.

To be sure, Haskell has features that I don't use. But purity, sum types, pattern matching, and the unusually rigorous (it's complete!) type system are all critical to its value to me.

---

[29]

Rust things I miss in C

" Automatic resource management

One of the first blog posts I read about Rust was "Rust means never having to close a socket". Rust borrows C++'s ideas about Resource Acquisition Is Initialization (RAII), Smart Pointers, adds in the single-ownership principle for values, and gives you automatic, deterministic resource management in a very neat package.

    Automatic: you don't free() by hand. Memory gets deallocated, files get closed, mutexes get unlocked when they go out of scope. If you are wrapping an external resource, you just implement the Drop trait and that's basically it. The wrapped resource feels like part of the language since you don't have to babysit its lifetime by hand.
    Deterministic: resources get created (memory allocated, initialized, files opened, etc.), and they get destroyed when they go out of scope. There is no garbage collection: things really get terminated when you close a brace. You start to see your program's data lifetimes as a tree of function calls.

...

Generics

Vec<T> really is a vector of whose elements are the size of T. It's not an array of pointers to individually allocated objects. It gets compiled specifically to code that can only handle objects of type T.

...

Traits are not just interfaces ((section moved to ootTypeNotes6))

...

Slices

I already posted about the lack of string slices in C and how this is a pain in the ass once you get used to having them.

...

Modern tooling for dependency management

Instead of

    Having to invoke pkg-config by hand or with Autotools macros
    Wrangling include paths for header files...
    ... and library files.
    And basically depending on the user to ensure that the correct versions of libraries are installed,

You write a Cargo.toml file which lists the names and versions of your dependencies. These get downloaded from a well-known location, or from elsewhere if you specify.

You don't have to fight dependencies. It just works when you cargo build.

...

Tests

C makes it very hard to have unit tests for several reasons:

    Internal functions are often static. This means they can't be called outside of the source file that defined them. A test program either has to #include the source file where the static functions live, or use #ifdefs to remove the statics only during testing.
    You have to write Makefile-related hackery to link the test program to only part of your code's dependencies, or to only part of the rest of your code.
    You have to pick a testing framework. You have to register tests against the testing framework. You have to learn the testing framework.

In Rust you write

  #[test]
  fn test_that_foo_works() {
      assert!(foo() == expected_result);
  }

anywhere in your program or library, and when you type cargo test, ...

Documentation, with tests

Rust generates documentation from comments in Markdown syntax. Code in the docs gets run as tests. You can illustrate how a function is used and test it at the same time:

/// Multiplies the specified number by two
///
/// ```
/// assert_eq!(multiply_by_two(5), 10);
/// ```
fn multiply_by_two(x: i32) -> i32 {
    x * 2
}

Your example code gets run as tests to ensure that your documentation stays up to date with the actual code.

Hygienic macros

...

No automatic coercions

...

No integer overflow ((actually i think Rust only has overflow checking in debug mode, not in production mode. But oot will have it))

...

Generally, no undefined behavior in safe Rust

In Rust, it is considered a bug in the language if something written in "safe Rust" (what you would be allowed to write outside unsafe {} blocks) results in undefined behavior. You can shift-right a negative integer and it will do exactly what you expect.

...

Pattern matching

You know how gcc warns you if you switch() on an enum but don't handle all values? That's like a little baby.

Rust has pattern matching in various places. It can do that trick for enums inside a match() expression. It can do destructuring so you can return multiple values from a function:

impl f64 {
    pub fn sin_cos(self) -> (f64, f64);
}

let angle: f64 = 42.0;
let (sin_angle, cos_angle) = angle.sin_cos();

You can match() on strings. YOU CAN MATCH ON FUCKING STRINGS.

let color = "green";

match color { "red" => println!("it's red"), "green" => println!("it's green"), _ => println!("it's something else"), }

You know how this is illegible?

my_func(true, false, false)

How about this instead, with pattern matching on function arguments:

pub struct Fubarize(pub bool);
pub struct Frobnify(pub bool);
pub struct Bazificate(pub bool);

fn my_func(Fubarize(fub): Fubarize, Frobnify(frob): Frobnify, Bazificate(baz): Bazificate) {
    if fub {
        ...;
    }

    if frob && baz {
        ...;
    }
}

...

my_func(Fubarize(true), Frobnify(false), Bazificate(true));

Standard, useful error handling

I've talked at length about this. No more returning a boolean with no extra explanation for an error, no ignoring errors inadvertently, no exception handling with nonlocal jumps.

  #[derive(Debug)]

If you write a new type (say, a struct with a ton of fields), you can #[derive(Debug)] and Rust will know how to automatically print that type's contents for debug output. You no longer have to write a special function that you must call in gdb by hand just to examine a custom type.

Closures

No more passing function pointers and a user_data by hand.

Conclusion

I haven't done the "fearless concurrency" bit yet, where the compiler is able to prevent data races in threaded code. I imagine it being a game-changer for people who write concurrent code on an everyday basis.

"

---

elsherbini 1 day ago [-]

I'm a scientist (PhD student in microbiology) that works with lots of data. My data is on the order of hundreds of gigabytes (genome collections and other sequencing data) or megabytes (flat files).

I use the `tidyverse` from R[0] for everything people use `pandas` for. I think the syntax is soooo much more pleasant to use. It's declarative and because of pipes and "quosures" is highly readable. Combined with the power of `broom`,fitting simple models to the data and working with the results is really nice. Add to that that `ggplot` (+ any sane styling defaults like `cowplot`) is the fastest way to iterate on data visualizations that I've ever found. "R for Data Science" [1] is great free resource for getting started.

Snakemake [2] is a pipeline tool that submits steps of the pipeline to a cluster and handles waiting for steps to finish before submitting dependent steps. As a result, my pipelines have very little boilerplate, they are self documented, and the cluster is abstracted away so the same pipeline can work on a cluster or a laptop.

[0] https://www.tidyverse.org/

[1] http://r4ds.had.co.nz/

[2] http://snakemake.readthedocs.io/en/stable/

reply

nonbel 1 day ago [-]

Sometimes I think I'm the only one who isn't really a fan of the tidyverse. I've found it slower, more prone to dependency issues, more prone to silent errors, and less well documented than most R packages (ie most of what you find on CRAN).

reply

in9 1 day ago [-]

Dependency management, in my opinion, is one of the problems in the R ecosystem. The lack of name spaces when calling functions has made the community have many little packages that only do one thing, so you are not really sure where a function actually came from, unless you know the code and the package.

An example is the janitor::clean_names function I like to use for standardizing the column names on a data.frame.

However, the tidyverse is really serious in terms of api consistency and functional style, with pipes and purrr's functionalities. The unixy style of base R is unproductive in terms of fast iterating an analysis. Also, the idea of "everything in a data frame" (or tibble, with list columns and whatnot) together with the tidy data principles really takes the cognitive load off to just get things started.

reply

---

amasad 47 minutes ago [-]

Excuse me if this question is too basic but how does a purely functional package manager work with a side-effectful package installation?

In Python (or basically any other language package manager) you can run arbitrary scripts (post-install etc) and so this doesn't lend itself nicely to a reproducible, functional approach.

reply

nextos 6 minutes ago [-]

A quick and overly simplistic explanation is that all inputs (source code, package dependencies or post-install scripts) used to build a package are employed to compute a unique hash.

Then the result of building a particular package is installed in /nix/store/hash-packagename. And this package links to other packages in the nix store using precise hashes. There is no dynamic linking. So the result is referentially transparent. A particular hash is guaranteed to correspond to the same package version, built in the same way and linked to the same dependencies. Furthermore, installing new package versions or modified versions of a package won't overwrite old ones, as hashes are different.

The same concept applies to a whole system setup, which is identified by a hash computed using all options that configure the system plus all packages available in your environment.

reply
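
((note: a toy Python sketch of the content-addressed idea described above -- hash all the build inputs, use the hash in the install path. Real Nix hashes the full build "derivation" and uses its own base-32 encoding; the function and names below are made up, this is just the shape of it:))

    import hashlib

    def store_path(name, source, dependencies, build_script):
        """Derive a /nix/store-style path from everything that went into the build."""
        h = hashlib.sha256()
        h.update(source)                  # package source code
        h.update(build_script.encode())   # build / post-install steps
        for dep in sorted(dependencies):  # precise store paths of dependencies
            h.update(dep.encode())
        return "/nix/store/%s-%s" % (h.hexdigest()[:32], name)

    libc = store_path("glibc-2.26", b"...", [], "./configure && make")
    print(store_path("hello-2.10", b"...", [libc], "./configure && make install"))
    # changing any input (source, deps, scripts) changes the hash, so the old
    # version is never overwritten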

---

QuickBasic modern variant QB64:

[30]

see example code with a DO..UNTIL loop at: https://hackadaycom.files.wordpress.com/2018/02/qb64_feat.png?w=566&zoom=2

another example:

' Init screen
SCREEN _NEWIMAGE(800, 600, 32)

' Load files
menubg& = _LOADIMAGE("splash.png")
menufont& = _LOADFONT("font.ttf", 30)
theme& = _SNDOPEN("theme.mp3", "SYNC,VOL")

' Set theme volume, start playing
_SNDVOL theme&, 0.3
_SNDPLAY theme&

' Load font
_FONT menufont&

' Show full screen image
_PUTIMAGE (0, 0), menubg&

' Say hello
PRINT "Hello Hackaday!"

ashleyn 12 hours ago [-]

Qbasic is something that simply has no modern-day parallel.

It started very simple, PRINT and GOTO were enough to get you started. It had the tools to expand along as you learned, and a great community of amateurs who would make cool games with it. Eventually, you'd grow out of it and try your hand at C, because by that point you were encouraged to learn so much (and with a gentle curve) that you exhausted its capabilities. It was an excellent educational tool, an excellent way to get kids "hooked" into "real programming".

Learning programming today isn't quite as simple as it was with Qbasic. It involves setting up a large toolchain, and for a beginner, the experience is daunting as the language community reviled both anything imperative and unstructured. There's no easy way to do anything other than text from the command line, which makes for some thoroughly uninteresting demos and examples. Qbasic wrapped an IDE and graphics library around the language in an all-in-one solution; it was so cool seeing colourful, pseudo-graphical demos like Nibbles or Gorillas. If a beginner wanted to do anything "cool-looking" today, this same beginner programmer would need to set up render to texture on an OpenGL surface. Yuck! And so they lose interest, believing programming is the mysterious and complex domain of gods.

This market did not go away. I don't quite think QB64 fills it, being little more than a nostalgic, artistic homage to Qbasic. If someone were to design a programming language with a learning curve as gentle as Qbasic, and wrap a simple IDE and graphics library around it, all while modernising some of its more dubious qualities...you'd have a real shot at recapturing the magic for a new generation of young programmers.

reply

mcphage 10 hours ago [-]

> If someone were to design a programming language with a learning curve as gentle as Qbasic, and wrap a simple IDE and graphics library around it, all while modernising some of its more dubious qualities...you'd have a real shot at recapturing the magic for a new generation of young programmers.

They did, years ago. It’s called “Processing”, and it’s very popular. It’s also used by a lot of artists to let them make active or interactive art pieces without needing to be an experienced programmer.

reply

vram22 9 hours ago [-]

Cool! I haven't tried out Processing, but I have tried the JavaScript port of it (Processing.js), initially created by John Resig, who also created jQuery.

vitoralmeida 10 hours ago [-]

PICO-8 (https://www.lexaloffle.com/pico-8.php) is a good alternative: a very simple system with a focus on game programming.

It captures that vibe of an old computer system (where you turn it on and are ready to program immediately) with simplicity and an awesome community to learn from.

reply

smogcutter 11 hours ago [-]

The closest thing I can think of today is https://processing.org. Processing has some serious flaws, but it's an all-in-one IDE with a library that makes it extremely easy to draw to the screen.

reply

simonh 4 hours ago [-]

Every Windows computer comes with VBScript, which is just as straightforward and a lot more powerful. Macs have AppleScript and Python, among others. Even on the iPad and iPhone there are quite a few simple, powerful options like Codea and Pythonista. There are loads of great ways to code on modern systems.

What they don’t have is the low-level, close-to-the-metal feel the old BASICs had. Even though they were interpreted, you could address hardware such as video memory and devices directly, which made them a great gateway to low-level programming.

reply

paulasmuth 11 hours ago [-]

> If a beginner wanted to do anything "cool-looking" today, this same beginner programmer would need to set up render-to-texture on an OpenGL surface. Yuck! And so they lose interest, believing programming is the mysterious and complex domain of gods.

Or just use the web platform: JavaScript, the HTML5 Canvas API, WebGL, all of that. I think it's pretty close in terms of getting a pretty (motivating) result quickly without much prior experience.

(Unrelated, but I can also remember writing the first lines of code of my life in QBasic on a DOS machine in ~1998; the blue screen definitely evokes childhood memories!)

reply

pvg 11 hours ago [-]

The conceptual load of these things is much, much higher than QBasic's, though; they're not nearly as easy to 'just use'. There's this:

    SCREEN 13
    PSET (1,1), 43

And then there's this.

https://stackoverflow.com/questions/4899799/whats-the-best-way-to-set-a-single-pixel-in-an-html5-canvas
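
((note: for contrast, a minimal browser-side TypeScript sketch of the canvas route; it assumes the page contains a <canvas id="screen"> element, and as the Stack Overflow thread discusses there are several ways to do this, fillRect being the simplest))

    // Set a single "pixel" at (1, 1) on an HTML5 canvas.
    const canvas = document.getElementById("screen") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d");
    if (ctx) {
      ctx.fillStyle = "rgb(170, 0, 170)";  // no built-in palette; pick a colour yourself
      ctx.fillRect(1, 1, 1, 1);            // a 1x1 rectangle is the usual single-pixel idiom
    }

And even this hides the rest of the setup (an HTML document, a file or server to open it from, the notion of a 2D context), which is exactly the conceptual-load gap being described.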

reply

evincarofautumn 1 hour ago [-]

My preferred term for this is “zero to pixels”. QB’s was very low, basically unmatched by most modern environments, which require considerably more setup...

reply

...

((QuickBASIC)) was the perfect first language. It's so easy to play little sounds or draw graphics, and there's basically no environment to set up. So much fun!

reply

merolish 11 hours ago [-]

Very much agreed on the ease of graphics and sound, and its influence on me as a student -- I wrote a version of Pong in about 250 lines back in the 90s. A single line of music was as simple as PLAY and a string language to specify duration and pitch. SCREEN followed by LINE/CIRCLE/PAINT and you had graphics. GET and PUT gave you sprites. Some of the fine people on Prodigy provided their amateur games and tutorials.

If there's a modern equivalent I'd be interested to know.

reply

digi_owl 11 hours ago [-]

QB64 basically is.

Judging from the article, you can do everything QBasic could, but also load PNGs (_LOADIMAGE), fonts (_LOADFONT) and MP3s (_SNDOPEN).

reply

Falkon1313 6 hours ago [-]

...

You could really only count on someone having an 80x25 text display (usually 16 color, but possibly monochrome), a keyboard for input, and a floppy drive for storage. Once you learned to do four simple things - position the cursor, print text, read keyboard input, and read and write to the drive - you could do almost anything that any professional software could do. There might be a parallel port with a printer or a serial port with a modem, so learning to read and write those ports gave you more options, but again that was simple.

...
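
((note: those four primitives still map fairly directly onto a modern terminal; here is a minimal Node/TypeScript sketch using ANSI escape sequences rather than anything DOS-specific; it assumes it is run in an interactive terminal, since setRawMode only works on a TTY))

    // Position the cursor, print text, read a keypress, and read/write a file.
    import { readFileSync, writeFileSync } from "fs";

    process.stdout.write("\x1b[2J");      // clear the screen
    process.stdout.write("\x1b[5;10H");   // move the cursor to row 5, column 10
    process.stdout.write("Hello. Press any key...");

    process.stdin.setRawMode(true);       // deliver keypresses immediately, unbuffered
    process.stdin.resume();
    process.stdin.once("data", (key) => {
      writeFileSync("keypress.txt", key);                 // write to "disk"
      const saved = readFileSync("keypress.txt", "utf8"); // and read it back
      process.stdout.write(`\nYou pressed: ${JSON.stringify(saved)}\n`);
      process.exit(0);
    });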

---

anticnstrctv 1 day ago [-]

Does anyone else feel that there's a huge hole in the UI world?

I like the Electron/React/React Native ecosystem - I really do, and this is coming from someone who dislikes JavaScript a priori. But is HTML/CSS/JS really the best we can do for desktop and mobile applications? I know responsive cross-platform development saves money for a lot of startups and orgs, and I'm not saying the web stack doesn't have its place.

I frequently imagine something that takes the best ideas from React/Redux and from other UI and layout frameworks, and lets you build a consistent, cross-platform (desktop and mobile, maybe even web with some kind of compilation path to JS or WebAssembly), responsive UI without the huge web stack. Maybe Python? It's got lots of competent developers in the market to support it. Maybe Rust or Go, if something lower-level were desired. Or language-agnostic, though sticking with one might be valuable.

reply

kodablah 1 day ago [-]

Yes, I think we all feel it. Every UI lib has something wrong: C++ sucks and is hard to iface w/ other langs (Qt, wxWidgets), not enough widgets (IUP, libui), not native enough in my OS (GTK), carries a full browser (Electron, nw.js), carries a full JVM (JavaFX, Swing), etc., etc. And then every subreddit and community always has people asking what's the best UI lib for language X. We all travel back to awesome-lang-X lists' GUI section every so often hoping someone will finally fix it.

Qt or wxWidgets, make a supported C iface (I am aware of some work on both fronts, e.g. wxc that was used by wxHaskell I think?). IUP and libui, thicken your libs (libui guy just got permission from his new company to get back working on the lib, yay). World, don't hate non-native-OS-widgeted apps. Browser vendors, stop being so tunnel-visioned into your my-OS-only, non-embeddable, or must-be-multiprocess shit. And let us trim some fat off at compile time please (waiting on you Servo, but please stay lean and single-process for embeddability). JVM AOT...we're waiting, we know it's coming.

And no, I would rather not have an interpreted (or dynamic) language if I can help it.

Edit: To clarify before a ton of responses pick apart my specific criticisms, I'm making these points to show there is a void. I could poke a ton more holes in these libs and more (I have used most), but it's hardly worth arguing the nuances here.

reply

oblio 1 day ago [-]

The browser is a cross platform UI, yes.

But is it on par with native offerings? Where are the standardized components? How many years did it take to get a simple grid layout, something Tk already had back in 1991? Where's the fool-proof GUI editor? How long did it take for it to get a date input?

The web had to fight its way to where it is now. And that's why it has so many scars (besides the self-inflicted DOM and JavaScript before strict mode and ECMAScript 5).

Regarding the OS vendors, well, yes, kernels, tools, basic apps. Also not all applications are graphical.

reply

barrongineer 1 day ago [-]

With Java 9's Jigsaw and jlink you can now build fairly light JavaFX apps. The entire JRE no longer has to be bundled, only the pieces you're using.

reply

TJSomething 1 day ago [-]

Looks like that's about 45 MB [1]. For comparison, Electron is about 80 MB. Qt starts at 50 MB depending on what you need, but a few years ago I got it down to 16 MB with UPX and a customized ICU. Gtk starts at ~20 MB and compresses to about half that with UPX. You could probably get those a bit smaller if your license permits static linking, but then you have to write C++.

[1] https://steveperkins.com/using-java-9-modularization-to-ship...

reply

hugi 1 day ago [-]

I recently started experimenting with JavaFX. I was pleasantly surprised by how nice it is, and how easy it is to package application bundles.

reply

zserge 1 day ago [-]

I agree with you, and I tried my best to change things.

A few years ago I made Anvil - a React-ish framework for native Android UI (reactive, declarative, works best with Redux) - and I used it in many production apps for as long as I was doing Android development. But it's still a niche product; most developers prefer the visual UI editor and whatever the official Google way to do Android apps is. Still, I'm very proud of Anvil.

https://github.com/zserge/anvil

Then, once I had to deal with desktop apps, I figured out that HTML/CSS/JS seems to be the best option for custom (no native widgets) and modern (animations, shadows, nice fonts, etc.) UI. But Electron was a heavy beast, so I made a tiny wrapper over native browser engines (webviews). So far I'm very happy with it; it's much lighter and smaller than Electron. And it's very convenient to use from C++ or Go. Some kind people added Rust and Nim support, and I'm now working on Python and Node.js bindings. So yeah, it's pretty much language-agnostic, although it's still web UI.

I really hope it can carve out its niche for small desktop apps that don't need the full power of Electron, yet still need to look good.

https://github.com/zserge/webview

reply

mhd 1 day ago [-]

I'm not the biggest fan of this sad reinvention of PostScript we call the browser stack, but my main problem with Electron and the like isn't that; it's their design target: making desktop apps look and feel like web apps.

We're throwing away all kinds of HIG (human interface guidelines) achievements made since 1984, for the sake of pretty colors and superfluous transitions. Not too long ago, considerable effort was spent making web apps behave like desktop apps; now the inverse is true.

I blame the rise of "UX", just like "DevOps" is to blame for needless infrastructure.

reply

mateuszf 1 day ago [-]

It's not that simple. You can go three routes:

1. Make the apps look pixel-perfect identical on all platforms - that's what Java Swing did. Result: all the apps feel "alien" and non-native to users.

2. Make all the apps look and behave natively, using UX patterns consistent with each platform. Result: you have to program them differently for each platform.

3. Somewhere in between - all you get is a mediocre GUI with a not very pleasant development model.

So the problem is not with the GUI toolkits but with the inconsistent UX paradigms between the platforms: different input models, interaction models, etc.

reply

pas 1 day ago [-]

Alien would be okay, were it not ugly.

Swing is unusable and ugly.

Just look at and try to use a Swing file picker.

...

Anyway, I'd gladly program them differently for a closer-to-native experience, if it were easy to do. Currently it's either Qt (C++) or GTK (C), or completely useless/out-of-date bindings.

And thus any glorified WebView reigns supreme.

reply

seba_dos1 1 day ago [-]

Python bindings for both Qt and GTK+ are really, really good. For other languages it sure is hit-or-miss though.

reply

Aelius 1 day ago [-]

I agree with all the complaints about Electron performance, but the thing that really gets me is how horribly developers fail at responsive design, with a toolset that is probably the best suited to responsive design since terminals.

How the hell do Slack, Discord, and Gitter STILL not collapse their sidebars when you resize the containing window to be tall and narrow? It makes them unusable in a tiling WM, and it's generally disrespectful of screen real estate in any WM.

This is the price of easily accessible development. We're doomed to relive past mistakes in UI, as new developers are blissfully unaware of lessons learned by existing frameworks.

Many people here are saying Electron is popular because of a lack of sane, consistent toolkits across each OS. There's also security to think of. Electron is likely very insecure relative to native software, but the responsibility rests purely on the Electron devs, freeing the app developer of that burden. Electron rose in an environment with no competition in that regard. But now Windows has UWP, and macOS has its own app platform. I sincerely hope we see these more universally leveraged, such that developers will stop abusing the web for app development.

---

I just checked whether Termux on my Android Pixel has at least 80x24 columns and rows of text.

In landscape mode it has 124x26 (columns x rows); in portrait, 60x45. So across both orientations it only guarantees 60 columns by 26 rows, and in portrait it falls short of 80 columns. But on the other hand, assuming landscape mode is available, 124x26 is more than 80x24. So I think it's probably safe to assume that any modern terminal can show at least 80x24 columns x rows.
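
((note: a quick way to check this programmatically, as a minimal Node/TypeScript sketch; process.stdout.columns and rows are only defined when stdout is an interactive terminal, so the fallbacks below cover the piped case))

    // Report the current terminal size and whether it meets the classic 80x24.
    const cols = process.stdout.columns ?? 0;
    const rows = process.stdout.rows ?? 0;
    console.log(`terminal is ${cols}x${rows} (columns x rows)`);
    console.log(cols >= 80 && rows >= 24 ? "meets 80x24" : "smaller than 80x24");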

---