the following article and discussion talk about various vulnerabilities in common dev practices that we should fix/watch/discourage:
"I’d see it in your source on GitHub?!
Your innocence warms my heart.
But I’m afraid it’s perfectly possible to ship one version of your code to GitHub and a different version to npm.
In my package.json I’ve defined the files property to point to a lib directory that contains the minified, uglified nasty code — this is what npm publish will send to npm. But lib is in my .gitignore so it never makes its way to GitHub. This is a pretty common practice so it doesn’t even look suspect if you read through these files on GitHub.
This is not an npm problem. Even if I’m not delivering different code to npm and GitHub, who’s to say that what you see in /lib/package.min.js is the real result of minifying /src/package.js?
So no, you won’t find my nasty code anywhere on GitHub.
I read the minified source of all code in node_modules!
OK now you’re just making up objections. But maybe you’re thinking you could write something clever that automatically checks code for anything suspicious.
You’re still not going to find much that makes sense in my source, I don’t have the word fetch or XMLHttpRequest anywhere, or the domain that I’m sending to. My fetch code looks like this:
“gfudi” is just “fetch” with each letter shifted up by one. Hard core cryptography right there. self is an alias for window.
self['\u0066\u0065\u0074\u0063\u0068'](...) is another fancy way of saying fetch(...).
The point: it is very difficult to spot shenanigans in obfuscated code, you’ve got no chance.
(With all that said, I don’t actually use anything as mundane as fetch, I prefer new EventSource(urlWithYourPreciousData) where possible. That way even if you’re being paranoid and monitoring outbound requests by using a serviceWorker to listen to fetch events, I will slink right by. I simply don’t send anything for browsers that support serviceWorker but not EventSource.) " -- [1]
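((note: the "gfudi" trick above is just a one-letter Caesar shift on the string; a tiny C sketch, not from the article, showing how trivially it decodes:))

    #include <stdio.h>
    #include <string.h>

    /* Decode a string whose letters were each shifted up by one,
       e.g. "gfudi" -> "fetch". This is the entire 'cryptography'
       the quoted article is describing. */
    static void unshift(char *s) {
        for (size_t i = 0; i < strlen(s); i++)
            s[i] = (char)(s[i] - 1);
    }

    int main(void) {
        char name[] = "gfudi";
        unshift(name);
        printf("%s\n", name);   /* prints "fetch" */
        return 0;
    }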
btown 1 day ago [-]
One of the biggest problems here is that there is no “chain of custody” from Github source to uploaded NPM module; otherwise one of the developers using the malicious package could have audited the source code before including it in their own code. ‘npm publish’ would ideally insist on reproducible builds, enforce this by minifying or compiling packages itself, and finally encourage the community to always audit the code associated with a module. Of course, people are lazy, NPM has no incentive to incur that server and engineering overhead, and someone could sneak in code anyways with a minor version update... There’s no clear solution here, and I think the only thing keeping up this house of cards is that there are much easier ways for black hats to make money.
reply
djsumdog 1 day ago [-]
Even if you had that, no one is going to inspect all that code. npm is a cluster-farkle of insane amounts of packages.
The whole point of the article is you should implement CSP.
reply
((note: 'CSP' is a web thing called 'content security policy'. The post author also recommends not running third-party code on pages that collect passwords or credit-card numbers))
JonathonW 21 hours ago [-]
I have a fairly simple Node project at work; it pulls in nine runtime dependencies, plus 13 development-time dependencies (most of those are babel or eslint-related).
Assuming none of those are pulling shenanigans like mentioned in the article (distributing different code than in their source repositories, or deliberately obfuscating malicious code), it's not completely unreasonable for me to go through and audit my direct dependencies. But, since the Javascript standard lib is crap, all of my direct dependencies have their own large pile of dependencies, which themselves depend on a bunch of stuff, and so on.
By the time it's all said and done, my "simple" Node project pulls in several hundred dependencies (I didn't go through and count, but my 'yarn.lock' on that project has ~4200 lines). I can't audit all of that code.
(This is particularly bad in Node and Javascript, but applies to other languages too. I don't think anyone's ever fully audited all of our Nuget dependencies, or Python dependencies... fortunately, those both tend to be more self-contained, so at least we know what we're getting there.)
reply
jmadsen 1 day ago [-]
I'm more of a back-end dev who doesn't know all the ins and outs of the actual software used - can someone explain to me why this is an npm problem, and not an excessive dependencies problem?
I thought npm was simply a package manager - I don't see anything in the article that is specific to npm, except he happens to say that word.
reply
ufmace 4 hours ago [-]
It's kind of an excessive dependencies problem, except exacerbated by two things. One is Javascript's poor stdlib. This means that not only are you tempted to include lots of little packages to do basic things, but so are all of the big packages that you include to do big things for you, and all of the packages they include, etc. Often there are a bunch of different packages for doing the same basic things, and nobody agrees on which one, so you may end up with 5 different packages that do the same thing required by various packages you use.
Two is that much of it is expected to be served to the browser, so it's minified. Who audits that the minified code is actually the same as the published Github code?
At least in Ruby and Python, the code from Rubygems/Pip should exactly match that version on Github. Not that anyone necessarily audits that either, but at least it's easier.
reply
---
elihu 1 hour ago [-]
Yes, and more than that we need a good, better thought-out modern successor to POSIX-type interfaces. For instance: I think a process ought to be able to have more than one current working directory and possibly more than one user-ID at a time. It should have the option to insert data in the middle of a file without having to manually shift the rest down. Shell scripts should be able to interact with the filesystem via transactions that can be rolled back if anything fails. Programs should be able to have typed input and output, checked by the shell and/or OS, which could also enable command-line tab-completion to search installed programs for any that match a desired type.
I have a bunch of other random gripes with POSIX-style OS interfaces and find it a bit frustrating that these interfaces haven't changed much in decades and seem to have attained a lot of inertia of the "we do it this way because we've always done it this way" kind.
reply
---
Aside from the almost completed features in the pipeline, I only see procedural macros (macros 2.0) and whatever happens to make async code easier to write[1] really impacting day-to-day code for most people.
[1] I'd really like to see F# Computation Expressions instead of async/await. I know the language experts have said Haskell-like do notation doesn't work in Rust but I'm not sure if the F# tweaks would make it work or not.
reply
bjz_ 5 hours ago [-]
> [1] I'd really like to see F# Computation Expressions instead of async/await. I know the language experts have said Haskell-like do notation doesn't work in Rust but I'm not sure if the F# tweaks would make it work or not.
I'd prefer a systems-y, zero-cost take on algebraic effects, similar to what OCaml is going to get. Could be much more extensible, and open things up to annotating whether functions panic or not, access global state, etc. Alas it's still a tricky research problem, even after all these years. There were some nice discussions from ICFP here - the comments strayed into talking about how effects might be implemented without a GC: https://www.youtube.com/watch?v=DNp3ifNpgPM
reply
lobster_johnson 39 minutes ago [-]
Is there a paper or writeup anywhere on what the algebraic effects system coming to OCaml is going to be like? (Does it have a name that can be googled?)
reply
grayrest 22 minutes ago [-]
It's part of the Multicore OCaml [1] effort.
[1] https://github.com/ocamllabs/ocaml-multicore/wiki
reply
jeremyjh 5 hours ago [-]
You can already get “Monad comprehensions” through the mdo and mdo-futures crates.
reply
--- " ...I would like to see a garbage-collected Rust. Take away the borrow checker, and you still have a modern language with UTF-8 support out-of-the-box, algebraic data types, pattern matching, a focus on performance, and great tooling (cargo + rustup = <3).
OCaml almost fits the bill (Rust is inspired by OCaml after all), but the tooling around it is lacking to put it mildly. "
dom96 8 hours ago [-]
This sounds a lot like Nim to me. Give it a go if you haven't already. It's a systems programming language that primarily uses a garbage collector.
reply
currymj 8 hours ago [-]
rust at one point had garbage-collected references. And there's still Rc/Arc types for reference counting.
I get what you're saying though. Another poster mentioned Swift and indeed Graydon Hoare, Rust's creator, is now working on Swift at Apple. And I believe some kind of notion of borrow checking/lifetimes is supposed to be coming to Swift in the future?
reply
pjmlp 8 hours ago [-]
Yes, the initial support is already there in Swift 4, it is called enforced exclusive access to memory in Apple documentation.
reply
mmirate 5 hours ago [-]
That's easy, just change your program so that every value of type T is now a value of type Arc<Mutex<T>>.
reply
weberc2 1 day ago
on: Rust in 2018: easier to use
I have a similar desire. OCaml and others push heap-allocated reference types by default (limiting your options for controlling allocations), and many have weak tooling and library support. F# looks neat, but it seems to have a lot of baggage relating to C# interop. F# also doesn't compile static binaries yet. I think I decided the best shot is to build a language that compiles to Go, since it has the right semantics, great libraries and tooling, and a world-class runtime (GC, painless async IO, lightweight thread scheduler for real parallelism). Obviously this is still a huge effort and probably a pipe dream, but it's the path of least resistance to get a "Rust with GC".
...
yen223 11 hours ago [-]
I also wished the ReasonML folks started from scratch, instead of inheriting OCaml's baggage (no forward references, a plethora of file types to deal with, no UTF8 strings without bringing in an external lib, and so on).
yen223 1 hour ago [-]
> Not sure what you mean by "no forward references"?
I probably got the name wrong, but it's the ability to use a function before it is defined.
In OCaml/ReasonML, you'd have to use the rec keyword and structure your codebase in a particular way to define mutually-recursive functions. It is a small but noticeable papercut, especially since recursion is so common in a functional language.
reply
bjz_ 5 hours ago [-]
> Not sure what you mean by "no forward references"?
I'm guessing they are talking about having implicit mutual recursion between items in a module, like Haskell has.
reply
Manishearth 11 hours ago [-]
This sounds like Swift :)
(ARC, not tracing GC, but still.)
reply
littlestymaar 7 hours ago [-]
If Arc counts, then it just sounds like Rust actually ;).
reply
K0nserv 7 hours ago [-]
I dunno if I agree, the borrow checker in Rust requires a lot more active thinking and intervention to get memory right. Objective-C and Swift ARC is really straightforward and there are few gotchas. Kinda like GC.
reply
littlestymaar 7 hours ago [-]
If you use Arc everywhere you just end up with the same behavior as Swift (except for closures I guess).
reply
steveklabnik 31 minutes ago [-]
Small note: Swift's ARC and Rust's Arc are different: Rust's Arc is atomic reference counting, while Swift's is automatic reference counting.
reply
kazagistar 4 hours ago [-]
I would like to see a different GC rust... an ability to integrate into an external GC system that it is embedded in, like Javascript or Java or whatever else. In other words, `Gc<_>` would mean "owned by the other runtime". I think there was a proposal like this, though I am not sure what happened to it.
reply
---
"
Now, we are going to write some library code. We create the files src/common/stdlib.c and src/common/stdio.c and corresponding header files.
In stdlib.c, we define the functions memcpy and bzero, as these will come in handy later, and we define itoa (integer to ascii) to make debugging easier.
In stdio.c, we define getc, putc, gets and puts as general purpose IO functions. We do this even though uart.c had uart_putc and uart_puts because later we are going to want to swap out uart_putc for a function that renders text to an actual screen, and it will be easier to replace one call to uart_putc here than many possible places. "
e also writes a malloc and a free (and a mem_init), using 4 KB pages; e also reserves 1 MB for the kernel heap.
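note: a minimal itoa sketch in the same spirit (decimal only; not necessarily the tutorial's version, and the caller's buffer is assumed to be at least 12 bytes):

    #include <stdint.h>

    /* Convert a signed 32-bit integer to a decimal ASCII string. */
    void itoa_simple(int32_t value, char *buf) {
        char tmp[12];
        int i = 0, j = 0;
        uint32_t v = (value < 0) ? (uint32_t)(-(int64_t)value) : (uint32_t)value;

        if (value < 0)
            buf[j++] = '-';
        do {                        /* emit digits least-significant first */
            tmp[i++] = (char)('0' + (v % 10));
            v /= 10;
        } while (v != 0);
        while (i > 0)               /* reverse them into the output buffer */
            buf[j++] = tmp[--i];
        buf[j] = '\0';
    }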
https://jsandler18.github.io/extra/atags.html
" The Atags is a list of information about certain aspects of the hardware. This list is created by the bootloader before our kernel is loaded. The bootloader places it at address 0x100, and also passes that address to the kernel through register r2. If you look at the function signature of kernel_main, void kernel_main(uint32_t r0, uint32_t r1, uint32_t atags), you can see that the atags pointer is the third argument.
The Atags can tell us how large the memory is, where the bootloader put a ramdisk, what is the serial number of the board, and the command line passed to the kernel via cmdline.txt
An Atag consists of a size (in 4 byte words), a tag identifier, and tag specific information. The list of Atags always starts with the CORE tag, with identifier 0x54410001, and ends with a NONE tag, with identifier 0. The tags are concatenated together, so the next tag in the list can be found by adding the number of bytes specified by the size to the current Atag’s pointer. "
http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#appendix_tag_reference
Table 3. List of usable tags (size is in 32-bit words):

    Tag name        Value       Size                               Description
    ATAG_NONE       0x00000000  2                                  Empty tag used to end list
    ATAG_CORE       0x54410001  5 (2 if empty)                     First tag used to start list
    ATAG_MEM        0x54410002  4                                  Describes a physical area of memory
    ATAG_VIDEOTEXT  0x54410003  5                                  Describes a VGA text display
    ATAG_RAMDISK    0x54410004  5                                  Describes how the ramdisk will be used in kernel
    ATAG_INITRD2    0x54420005  4                                  Describes where the compressed ramdisk image is placed in memory
    ATAG_SERIAL     0x54410006  4                                  64 bit board serial number
    ATAG_REVISION   0x54410007  3                                  32 bit board revision number
    ATAG_VIDEOLFB   0x54410008  8                                  Initial values for vesafb-type framebuffers
    ATAG_CMDLINE    0x54410009  2 + ((length_of_cmdline + 3) / 4)  Command line to pass to kernel
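note: a sketch of walking that list, going by the description above; the struct and function names are mine, not necessarily the tutorial's. The size field counts 32-bit words, so the pointer is advanced in words rather than bytes:

    #include <stdint.h>

    #define ATAG_NONE 0x00000000
    #define ATAG_CORE 0x54410001
    #define ATAG_MEM  0x54410002

    typedef struct atag {
        uint32_t size;   /* length of this tag in 32-bit words, including this header */
        uint32_t tag;    /* tag identifier, e.g. ATAG_CORE, ATAG_MEM, ATAG_NONE */
        /* tag-specific data follows */
    } atag_t;

    /* Walk the list the bootloader left at the address passed in r2 (0x100 on
       the Pi) and return the memory size from the ATAG_MEM tag, or 0 if none. */
    uint32_t atag_get_mem_size(atag_t *tag) {
        while (tag->tag != ATAG_NONE) {
            if (tag->tag == ATAG_MEM) {
                uint32_t *data = (uint32_t *)(tag + 1);
                return data[0];                              /* ATAG_MEM: size, then start address */
            }
            tag = (atag_t *)((uint32_t *)tag + tag->size);   /* advance by 'size' words */
        }
        return 0;
    }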
then they implement gpu_putc, which prints a character to the screen hardware.
then they implement interrupts
https://jsandler18.github.io/extra/interrupts.html
" This set of addresses, also known as the Vector Table, starts at address 0. Below is a table that describes each exception Address Exception Name Exception Source Action to take 0x00 Reset Hardware Reset Restart the Kernel 0x04 Undefined instruction Attempted to execute a meaningless instruction Kill the offending program 0x08 Software Interrupt (SWI) Software wants to execute a privileged operation Perform the opertation and return to the caller 0x0C Prefetch Abort Bad memory access of an instruction Kill the offending program 0x10 Data Abort Bad memory access of data Kill the offending program 0x14 Reserved Reserved Reserved 0x18 Interrupt Request (IRQ) Hardware wants to make the CPU aware of something Find out which hardware triggered the interrupt and take appropriate action 0x1C Fast Interrupt Request (FIQ) One select hardware can do the above faster than all others Find out which hardware triggered the interrupt and take appropriate action "
" Pending registers indicate whether a given interrupt has been triggered. These are used in order to determine which hardware device triggered the IRQ exception. Enable registers enable certain interrupts to be triggered by setting the appropriate bit "
" The Raspberry Pi has 72 possible IRQs. IRQs 0-63 are shared between the GPU and CPU, and 64-71 are specific to the CPU. The two most important IRQs for our purposes will be the system timer (IRQ number 1) and the USB controller (IRQ number 9). "
they provide a function register_irq_handler. they set up the system timer peripheral.
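note: a sketch of the pending/enable idea; the register addresses assume the original Pi's BCM2835 (peripheral base 0x20000000, interrupt controller at +0xB000) and differ on later models, and the handler table is my own framing rather than the tutorial's exact code:

    #include <stdint.h>

    #define IRQ_PENDING_1  ((volatile uint32_t *)0x2000B204)  /* IRQs 0-31  */
    #define IRQ_PENDING_2  ((volatile uint32_t *)0x2000B208)  /* IRQs 32-63 */
    #define IRQ_ENABLE_1   ((volatile uint32_t *)0x2000B210)  /* write 1 bits to enable */

    typedef void (*irq_handler_t)(void);
    static irq_handler_t handlers[72];          /* 72 possible IRQs on the Pi */

    void register_irq_handler_sketch(int irq, irq_handler_t fn) {
        handlers[irq] = fn;
        if (irq < 32)
            *IRQ_ENABLE_1 = (1u << irq);        /* enable just this interrupt */
    }

    /* Called from the IRQ exception vector: see who is pending and dispatch. */
    void irq_dispatch(void) {
        for (int irq = 0; irq < 64; irq++) {
            uint32_t pending = (irq < 32) ? *IRQ_PENDING_1 : *IRQ_PENDING_2;
            if ((pending & (1u << (irq & 31))) && handlers[irq])
                handlers[irq]();
        }
    }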
https://jsandler18.github.io/tutorial/process.html
each process gets a Process Control Block (PCB)
" typedef struct pcb { proc_saved_state_t * saved_state; Pointer to where on the stack this process's state is saved. Becomes invalid once the process is running void * stack_page; The stack for this proces. The stack starts at the end of this page uint32_t pid; The process ID number DEFINE_LINK(pcb); char proc_name[20]; The process's name } process_control_block_t; "
note: proc_saved_state_t is just the saved contents of each of the registers. DEFINE_LINK is the link for a linked list.
they create a "list of processes that want to run. This is called the Run Queue". They create a scheduler which runs every 10ms (using the system timer interrupt). They use round-robin scheduling. They export 'void create_kernel_thread(kthread_function_f thread_func, char * name, int name_len)'.
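note: a sketch of the round-robin step; the run-queue helpers and switch_context are assumed names, and the actual register save/restore happens in assembly in the tutorial, which is elided here:

    /* On each timer tick (every 10 ms): put the running process at the back of
       the run queue and switch to whichever process is at the front. */
    extern process_control_block_t *current_process;
    extern process_control_block_t *run_queue_pop(void);
    extern void run_queue_push(process_control_block_t *pcb);
    extern void switch_context(process_control_block_t *from, process_control_block_t *to);

    void schedule(void) {
        process_control_block_t *next = run_queue_pop();
        if (next == 0)
            return;                       /* nothing else wants to run; keep going */
        process_control_block_t *old = current_process;
        run_queue_push(old);              /* old process goes to the back of the line */
        current_process = next;
        switch_context(old, next);        /* save old's registers, restore next's */
    }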
The next part, not yet written, is titled "Locks".
actually this whole thing was a good read/skim. i added 'part 9: locks and on' to ootToReads.
---
in Rust, there is a marker trait called 'Send' that marks types whose ownership can safely be transferred from one 'execution context' (task/thread) to another -- so yes, this is essentially move semantics across threads: the compiler will only let you hand a value off to another thread if its type implements Send.
---
> "4. JSON is a non-starter long-term: it sucks for small devices. They need a binary format that is easy to parse." -- wiremine (full comment quoted below)
ASN.1?
https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
> ASN.1 is similar in purpose and use to protocol buffers and Apache Thrift, which are also interface description languages for cross-platform data serialization. Like those languages, it has a schema (in ASN.1, called a "module"), and a set of encodings, typically type-length-value encodings. However, ASN.1, defined in 1984, predates them by many years. It also includes a wider variety of basic data types, some of which are obsolete, and has more options for extensibility. A single ASN.1 message can include data from multiple modules defined in multiple standards, even standards defined years apart.
reply
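((note: ASN.1 encodings like BER/DER are type-length-value; a toy TLV reader/writer in C, not real ASN.1 (real DER has multi-byte tags and lengths), just to show why TLV is cheap to parse on small devices:))

    #include <stdint.h>
    #include <string.h>

    /* Toy TLV record: one tag byte, one length byte (0-255), then the value. */
    size_t tlv_put(uint8_t *out, size_t cap, uint8_t tag,
                   const uint8_t *val, uint8_t len) {
        if (cap < (size_t)len + 2)
            return 0;                 /* doesn't fit */
        out[0] = tag;
        out[1] = len;
        memcpy(out + 2, val, len);
        return (size_t)len + 2;       /* bytes written */
    }

    /* Read one record; returns bytes consumed, or 0 on truncated input. */
    size_t tlv_get(const uint8_t *in, size_t avail, uint8_t *tag,
                   const uint8_t **val, uint8_t *len) {
        if (avail < 2 || avail < (size_t)in[1] + 2)
            return 0;
        *tag = in[0];
        *len = in[1];
        *val = in + 2;
        return (size_t)in[1] + 2;
    }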
jnwatson 4 hours ago [-]
Asn.1 is great, and I see a lot of folks relearning the hard-earned lessons of it. The only problem is that there are no good open source implementations.
reply
ofek 2 hours ago [-]
https://github.com/wbond/asn1crypto
reply
kevin_thibedeau 2 hours ago [-]
All of the open source TLS libs have to support enough of ASN.1 to process certificates.
reply
mkj 5 hours ago [-]
cbor looks better for small devices?
reply
---
wiremine 6 hours ago [-]
I'm a fulltime IoT software consultant, and I wish we'd see more of these initiatives. A few thoughts:
1. The problem isn't the transport layer, it's the application layer. The transport layer is mostly solved via MQTT, COAP, Thread, etc. Sure, we can improve there, but the real problem is the application layer. So I applaud Mozilla's attempt to bring something into this space.
2. Bootstrapping this will require a substantial number of hardware vendors to sign on, both at the edge and at the hub layers. IMHO this is why Google Weave [1] never took off in its original incarnation. Bootstrapping this like they did with web stuff isn't enough, because this isn't the web.
3. Devices are only part of the problem: We need a software services layer, too. Think time services, IFTTT-like orchestrators, media services, etc.
4. JSON is a non-starter long-term: it sucks for small devices. They need a binary format that is easy to parse.
5. Request-Response isn't the right pattern for most use cases.
6. The Property/Action/Event concept is a solid start.
7. For the love of everything holy, add versioning!
[1] http://internetofthingsagenda.techtarget.com/feature/Google-...
Edit: Grammar.
---
geofft 8 hours ago [-]
Speaking as a developer who primarily writes Python and secondarily C, Rust, C++, bash, and others as necessary, who has in fact worked on a large-scale C++ UI project in Qt on the Linux desktop running alongside PyGTK UIs, and who doesn't actively write JavaScript, I have to say that JavaScript is a really good choice of language, especially for UI, because it has a strong bias towards event-based programming and callbacks instead of blocking code and threads. If you're going to pick the right tool for the job of UI, I'd put JS a bit ahead of Rust (because of immaturity; that is likely to change real soon), which I'd put a lot ahead of C++ (because you're going to screw up threading), a little bit ahead of C and Python (because you're definitely going to screw up threading). If you're interested in rapid prototyping, I'd move Python up several notches - but not past JS, which is what I reach for these days for throwaway prototypes if I think I know enough JS to pull it off.
(If someone has a high-quality way to use asyncio or Twisted with the Qt event loop, I'd probably bump Python a bit higher.)
reply
---
since i think the perfect writing system wouldn't have uppercase/lowercase, maybe Oot should not use capitalization for anything?
---
3 "what's new in python 3" thingees (i already read and took notes on them):
---
how much data can you fit in a UDP packet?
The IPv4 max payload size for UDP is 65507 bytes [5] (65,535 - 8 byte UDP header − 20 byte IP header). However, in reality, you want to use a much lower limit to avoid packet fragmentation ([6] claims that firewalls may drop fragmented packets, and that IPv6 will drop them).
In https://stackoverflow.com/questions/14993000/the-most-reliable-and-efficient-udp-packet-size the top answer suggests assuming a 1500 MTU (btw i think a maxed size ethernet packet payload is around 1500 bytes?) and computing: 1500 MTU - 20 IP hdr - 8 UDP hdr = 1472 bytes. But that answer goes on to note that the minimum MTU size that an host can set is 576, and IP header max size can be 60 bytes, and then computes 576 MTU - 60 IP - 8 UDP = 508.
On https://stackoverflow.com/questions/1098897/what-is-the-largest-safe-udp-packet-size-on-the-internet the top answer suggests 512 "although even that does not leave quite enough space for a maximum size IP header". Other sources also use 512 [7]
Compare to my 1k message size in my 'fabric' idea.
Ed25519 signatures are 64 bytes (512 bits).
So an Ed25519-signed payload of the 'safe' size of 508 would leave 508 - 64 = 444 bytes for signed payload (or, 222 16-bit words).
222 words is greater than 128+64 = 192. So one could imagine a 128 word sub-payload and a 64-word sub-header (you still have 30 16-bit words to spare so this is very conservative).
So, when thinking of data structures that might be sent in one freestanding message (as opposed to 'bulk data' eg a picture that would frequently be sent over many messages), one might try to make sure it fits into 128 words (16-bit words) (or, 256 bytes).
Compare to cache lines, which are frequently 64 bytes (32 16-bit words; 512 bits) -- so 4 cache lines (2k bits) fit in one 128-word subpayload.
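note: a sketch of that layout as structs, just to sanity-check the arithmetic; the names and the header/payload split are mine, following the 64-word / 128-word idea above:

    #include <stdint.h>

    /* Conservative 'safe' UDP payload: 576 MTU - 60 IP - 8 UDP = 508 bytes. */
    #define SAFE_UDP_PAYLOAD  508
    #define SIG_BYTES          64     /* Ed25519 signature */
    #define SUBHEADER_WORDS    64     /* 64 x 16-bit words  = 128 bytes */
    #define SUBPAYLOAD_WORDS  128     /* 128 x 16-bit words = 256 bytes */

    typedef struct {
        uint16_t subheader[SUBHEADER_WORDS];    /* 128 bytes */
        uint16_t subpayload[SUBPAYLOAD_WORDS];  /* 256 bytes */
        uint8_t  signature[SIG_BYTES];          /*  64 bytes */
    } message_t;                                /* 448 bytes; 60 bytes (30 words) to spare */

    /* compile-time check that the whole message fits in the safe payload */
    typedef char message_fits[(sizeof(message_t) <= SAFE_UDP_PAYLOAD) ? 1 : -1];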
But could we go up to 1472? http://www.pcvr.nl/tcpip/udp_user.htm did an experiment in around 1993:
" Fifteen countries (including Antarctica) were reached and various transatlantic and transpacific links were used. Before doing this, however, the MTU of the dialup SLIP link between the author's subnet and the router netb (Figure 11.12) was increased to 1500, the same as an Ethernet.
Out of 18 runs, only 2 had a path MTU of less than 1500. One of the transatlantic links had an MTU of 572 (a value not even listed as a likely value in RFC 1191) and the router did return the newer format ICMP error. Another link, between two routers in Japan, wouldn't handle a 1500-byte frame, and the router did not return the newer format ICMP error. Setting the MTU down to 1006 did work.
The conclusion we can make from this experiment is that many, but not all, WANs today can handle packets larger than 512 bytes. Using the path MTU discovery feature will allow applications to take advantage of these larger MTUs. "
it's also worth thinking of stuff like
https://github.com/libcsp/libcsp
" Cubesat Space Protocol - A small network-layer delivery protocol designed for Cubesats"
...
The protocol is based on a 32-bit header containing both transport and network-layer information. Its implementation is designed for, but not limited to, embedded systems such as the 8-bit AVR microprocessor and the 32-bit ARM and AVR from Atmel.
...
The idea is to give sub-system developers of cubesats the same features of a TCP/IP stack, but without adding the huge overhead of the IP header. The small footprint and simple implementation allows a small 8-bit system with less than 4 kB of RAM to be fully connected on the network.
...
Very Small Footprint 48 kB code and less than 1kB ram required on ARM
"
https://github.com/libcsp/libcsp/blob/master/doc/mtu.rst gives some example MTUs for this case: 256 bytes, 200 bytes, 100 bytes
What MTUs are found on the internet? https://blog.cloudflare.com/path-mtu-discovery-in-practice/ says "The minimal required MTU for all IPv6 hosts is 1,280 ( http://www.delaat.net/rp/2012-2013/p55/presentation.pdf says something about 1232?), which is fair. Unfortunately for IPv4 the value is 576 bytes. On the other hand RFC 4821 suggests that it's "probably safe enough" to assume minimal MTU of 1,024."
Note however that 1024 MTU gives you <1024 payload in UDP. So the largest power-of-two that fits in our payload is still 512 bytes here (256 16-bit words). Otoh you could say that's a 512 byte SUB-payload, with a 256 byte sub-header, plus lots of spare room still (enough for a signature certainly). So our data structure max is now 256 words, instead of 128.
In this experiment, the smallest path MTU observed was 1240: https://www.nlnetlabs.nl/downloads/publications/pmtu-black-holes-msc-thesis.pdf . "Remarkable is the fact that the IPV4 minimum MTU of 576 bytes is not used at all."
This guy recommends around 1400: https://stackoverflow.com/questions/2613734/maximum-packet-size-for-a-tcp-connection/3074427#3074427
So it seems to me that a payload size of 1k isn't crazy, even though that's above the RFC 4821 suggestion.
---
" The computers we had practically encouraged you to get into programming right out of the box. You booted into the Basic interpreter, and that was effectively your command line interface to the operating system. Depending on the computer, and the Basic interpreter you had, you might very easily be able to get right into doing some graphics, and maybe making it produce some sounds with just a few commands...
The computers we had had two screen modes. One was character-based, and one was based in bitmap graphics. Both modes were just different views on screen memory, and each represented what was in it differently. " -- [8]
" Wozniak designed the Apple II with some specific ideas in mind. He wanted to improve on the design of the Apple-1, add additional memory and input/output options (the slots), plus he wanted to make it possible to do the game Breakout in software. This gave the parameters for the lo-res graphics, the colors, and the single-bit sound, as well as the game paddle inputs. Had Woz wanted a computer to keep track of database files, or for word processing, likely he would have focused more on the text display (supporting upper and lowercase) and data storage (a more robust software interface for cassette storage). But without the color and sound, it is hard to say whether or not the Apple II would have had as much of an impact on the market as it did. " -- [9]
" Then there is BASIC. I hate to say it, but Apple BASIC sucks compared to Commodore BASIC. granted, Apple at least has graphics commands, I can give them credit for that. But the screen editor is very un-userfriendly. You hit the delete key (positioned where backspace is on the C64) and what do you get? Rather than backing up, it puts some funny character on the screen. What is up with that? Editing lines in BASIC is also very painful. Oh.. and I tried to write a program and found there doesn't seem to be any way to change text colors in BASIC and the GET statement doesn't work like the Commodore version. It actually halts the program until a key is pressed. That makes it impossible to write a program in pure BASIC that does any kind of active routine while waiting on a keystroke. " -- [10]
" The IIe's double hires mode is quite a bit better than the hires mode in that sixteen colors can be used rather than six. Many Sierra games used this mode for the IIe. Then there is the issue of the poor quality sound. It was interesting on what programmers did to work around this problem. The Electronic Duet program allowed two "voices" to be used, though it would halt any other operations, rendering it useless for action games. A similar method was used in the title screens of Prince of Persia and Dark Lord. The Mockingboard sound card was neat, yet it had little support and didn't come close to the quality of the SID chip. " -- [11]
"
Microprocessor CPU: MOS Technology 6510/8500 (the 6510/8500 is a modified 6502 with an integrated 6-bit I/O port); clock speed 0.985 MHz (PAL) or 1.023 MHz (NTSC)
Video: MOS Technology VIC-II 6567/8562 (NTSC), 6569/8565 (PAL)
- 16 colors
- Text mode: 40×25 characters; 256 user-defined chars (8×8 pixels, or 4×8 in multicolor mode); or extended background color, 64 user-defined chars with 4 background colors; 4-bit color RAM defines foreground color
- Bitmap modes: 320×200 (2 unique colors in each 8×8 pixel block), 160×200 (3 unique colors + 1 common color in each 4×8 block)
- 8 hardware sprites of 24×21 pixels (12×21 in multicolor mode)
- Smooth scrolling, raster interrupts
Sound: MOS Technology 6581/8580 SID
- 3-channel synthesizer with programmable ADSR envelope
- 8 octaves
- 4 waveforms per audio channel: triangle, sawtooth, variable pulse, noise
- Oscillator synchronization, ring modulation
- Programmable filter: high pass, low pass, band pass, notch filter
Input/Output: Two 6526 Complex Interface Adapters
- 16 bit parallel I/O
- 8 bit serial I/O
- 24-hour (AM/PM) Time of Day clock (TOD), with programmable alarm clock
- 16 bit interval timers
RAM: 64 KB, of which 38 KB (minus 1 byte) were available for BASIC programs; 512 bytes color RAM (memory allocated for screen color data storage). Expandable to 320 KB with the Commodore 1764 256 KB RAM Expansion Unit (REU), although only 64 KB directly accessible; the REU was mostly intended for GEOS. REUs of 128 KB and 512 KB, originally designed for the C128, were also available, but required the user to buy a stronger power supply from some third party supplier; with the 1764 this was included. Creative Micro Designs also produced a 2 MB REU for the C64 and C128, called the 1750 XL. The technology actually supported up to 16 MB, but 2 MB was the biggest one officially made. Expansions of up to 16 MB were also possible via the CMD SuperCPU.
ROM: 20 KB (9 KB Commodore BASIC 2.0; 7 KB KERNAL; 4 KB character generator, providing two 2 KB character sets)
Input/output (I/O) ports and power supply " -- [12]
" The Apple II had a clear edge when it came to business applications. The pioneering spreadsheet Visicalc gave the machine an early advantage that it never relinquished to any other 8-bit machine.
When it came to games, things get a bit blurrier. The Apple II had a major advantage when it came to role-playing games, partly because software developers assumed most Apple II owners had two disk drives, and they wrote their games to take advantage of that. Most C-64 software assumed a single-drive machine, even though dual-drive 64s were fairly common. Plus, the faster disk drives on the Apple II made role-playing games more enjoyable.
The C64’s advantage was with arcade-style games because of its 3-voice sound chip and sprite graphics. Fast movement was possible on the Apple II but it required a lot more complex programming, and the Apple II’s beeper couldn’t compare with the 64’s mini-synthesizer. " -- [13]
" Sound IIgs sound stomps on the SID or any 8 bit machine of the day. The Mockingboard upgrade gave Apple IIs very good sound. Too bad it wasn't supported by more titles. Stock Apple II sound sucks. The C64 beats the stock Apple II and more titles support the SID than Mockingboard and IIgs sound combined. " -- [14]
" The Apple II wins, at least out of the box. The Commodore may have better graphical and sound capabilities, but trying to tap into those capabilities in BASIC is easier said than done. Actually, AppleSoft? BASIC doesn't make sound programming all that easy either, but there are a lot of nice commands for creating both lo-res and hi-res graphics. " -- [15]
" The BASIC interpreter is derived from the same codebase, but the Apple version has graphics functionality built-in, which the C64 doesn't. But the C64 version has bitwise logic, which was removed from the Apple version. " -- [16]
"
The C64 had great bang for the buck! And it got used for lots of stuff, despite its business display limitations. No real 80 column support impacted things. That was a big deal at that point in time. A nicely configured C64 ended up being used for real estate contract stuff and it worked out really well, for the cost. A similar task performed on the Apple ][ literally was no contest! Data access was faster, display capabilities better, software library more feature and title rich, etc... The end product on each varied significantly, with the Apple able to do seriously good quality output, if somebody wanted to pay for that. " -- [17]
" In terms of scientific uses, test measurement, industrial automation, data logging, sensors, interfacing to control systems, I/O, higher end graphics output, plotting, CAD design, etc... an Apple 2 could be fitted with great devices, in the box, and supported as in the box things, due to how the system ROM was written. A C64 just didn't have any of that design vision incorporated into it, and as a result just didn't see those kinds of things being possible. Apple 2 computers were used as development computers regularly too, this due to effective storage capability, robust programming tools and the ability to interface to development type hardware as needed on cards. " -- [18]
" Gaming on an Apple is really interesting! It doesn't have fancy graphics chips, nor sound. Lacking a sound chip really impacted the machine, but not having a graphics chip just didn't to the same degree. No wait state video access and the basic ability to page flip was just enough to keep the machine relevant for most titles, resulting in often surprisingly acceptable games, despite very low expectations overall.
I got my Atari to game on, as did a ton of people, and I programmed it a lot too, as did a lot of people. I got the Apple to get shit done, as did a lot of people. A well equipped Apple produced writing and graphics for me well into the 90's, and by then it was cheap ass to get done and surprisingly effective. The next machine I got that worked out that way happened to be a PC, which mirrored many of the great Apple design decisions. And the PC didn't prioritize it's gaming display capabilities then either, but it was the machine to get shit done.
(which could be said of the 16 bit computers as well, though I personally did not entertain that path, moving to UNIX / SGI instead) " -- [19]
" I don't believe there was any special vision incorporated into the Apple II design. These following features came together synergistically to add up to more than the sum of their parts.
1- Completely open architecture built around cheap TTL components. No custom chips. 2- Fully, and most excellently, documented ROM listings. 3- 8 expansion slots. And built-in A/D converter. 4- A beyond ultra-reliable & elegant disk storage system. Fast, lightweight firmware, parallel operation. 5- Lightning fast text response and immediate reset when demanded.
...
Did you know that when you do disk access on an Apple II, the 6502 must stop what it was doing and take command of the disk? The 6502 controls the stepper motor, the spindle motor, and the data flow to the sequencer. The 6502 sets the tone of the entire operation.
Everything in the Apple II was as close to the bare metal of the TTL logic as possible and yet at the same time remain suitable for consumer use. Little or no firmware to get in the way or slow things down. " -- [20]
"
That the Apple II's design could compete against a later generation was testament to its design. Consider that the Apple II has much much more in common with the dedicated pong-style units and very early arcade motherboards, single-board hobbyist computers like the KIM-1, or the IMSAI and Altair S-100 bus systems; than it does with a C64 and A800. Individual TTL logic, being the key point here. No custom chips.
Once you get into custom chips, the hardware starts closing up and complexity slows things down. Look what happened to the Amiga! This is a system that collapsed under its own weight seemingly. The machine had great specs on paper, and by all rights and means it should have been able to do arcade-perfect ports of many earlier games. But the damned thing was buried under bloated & buggy firmware trying to manage those custom chips.
We didn't see this problem on the C64 or A800 because the custom chips weren't complex enough. Not yet. They were still simple and "bare metal enough" to be an aid as opposed to a hindrance. The 6502 still had a good level of authority at the clock-cycle level. Now look at the Amiga camp - you had all sorts of shit going on on the bus. Too much police in the street not communicating with each other. The city traffic slows down while each cop radios to base for instructions. Too much "unrelated-to-what-you're-working-on" code has to be executed.
While the likes of this complexity had never been seen in a toy computer before it was still badly implemented, it was only good for advertising fodder and niche applications. These custom chips needed a lot of CPU overhead to keep them sync'd. Now imagine throwing a memory management chip in there (onboard later 68xxx processors), how the fuck can anything get done? The system had barely enough power to boot itself! Oftentimes my shitbox Apple II would outperform the Amiga in untold number of instances.
In contrast, look at the 1st MAC systems. The same philosophy of the II applied. Do everything with basic hardware. Allow efficient bus usage and let the application data flow unhindered. In that environment software can work magic.
Thank god the C64 & A800 didn't bloat up like these early 16-bit machines. Same thing with the VCS, it has a custom chip, the TIA, but it is a low-count simplistic beast not weighing anything down. " -- [21]
" Text density on the screen differed somewhat, but was overall quite similar (with the exception of the VIC-20): computer rows x columns char per screen PET 25 x 40 1000 Apple II 24 x 40 960 TRS-80 16 x 64 1024 Atari 24 x 40 960 VIC-20 23 x 22 506 C64 25 x 40 1000 " -- [22]
" The one final point I want to reiterate is that the Apple II expansion slots provided direct connections to the address/data lines along with many other timing signals and strobes - without firmware interaction. This is also what the IBM PC did and the toy computers did not. " -- [23]
"
I ended up moving right into manufacturing and engineering work. Atari, C64, and CoCo 3 were good for the soul, programming, gaming, hacking on the chips and sometimes doing quick little projects with the various ports.
But, it was the Apple 2 and PC that provided access to experiences that proved to be worth it, translating right into making good money, which I did in manufacturing and automation. If you had an Apple, and a few contacts, you were in. Same for the PC. ... Using all the business apps paid very well as I had the jump on a lot of people, right from school able to do spreadsheets, various word processing, publishing and even fairly professional quality collateral. The Apple 2 could do postscript output, and that was killer when it came to those things. Helped to pay for some college on my part.
CAD was something very powerful that I first did on the Apple as well, moving rapidly to the PC where it's greater memory space, speed and cheaper / faster storage options made mechanical CAD a reality. Mix in the programming skills, and I sold my first software package for CAD systems, making me enough money to really get a great PC, and that's remained true. For me, the computing hobby has always paid for itself and always will. ... Lots of C64 / Atari users just paid and played and that's fine, but that's also not the typical Apple scene and that's something people should understand today because that difference was very significant and often overlooked in lieu of the sexy games and such we all like so much. " -- [24]
"
I always thought of the Apple II as the last single-board hobbyist computer. S-100's, RCA Cosmac VIP, Kim-1, TRS-80 Model I, IMSAI, Altair, Heathkit, countless others. The Apple II belongs to the same heritage and engineering philosophy as those machines. Sooner or later, a machine would hit on the right combination of features to make it long-lived. And the Apple II happened to be it
Everything designed after it was geared toward the consumer market with a purpose. Features were now being carefully chosen in a price/performance matter with a goal in mind. Remember there was no goal or marketing benchmark set for the Apple II, the joe-blow consumer and technical hobbyist alike were still discovering what a computer was and what one could do with them when the 2 series was born. It is the last of the hobby systems, last of the green-screen terminal type devices. And it guided the transition from basement-dwelling to mass-marketing computers as a practical or fun tool. " -- [25]
" 6 color high res screens are enough to do basically anything. 4 color high res screens aren't, " -- [26]
" 6 color screens, with some high resolution color capability meant for some killer pixel art that is very difficult to reproduce on the machines with custom hardware. That hardware brought more colors, which is a good thing, but the trade-off was not having smaller color dots, and or significant freedom of movement on multi-color scenarios. These differences impacted how games were presented, and the fun part here is exploring that, of which the Apple has a lot to contribute. " -- [27]