proj-jasper-jasperNotes6

https://www.semipublic.comp-arch.net/wiki/Bad,_Good,_and_Middling_Ideas_in_Computer_Architecture

https://www.semipublic.comp-arch.net/wiki/Design_Principles_and_Rules_of_Thumb

--

my notes on katherine coons's notes on tim sweeney's talk

Three Kinds of Code

game simulation (shared state concurrency): software transactional memory
numeric computation: implicit thread parallelism (purely functional)
shading: implicit data parallelism (SIMD)

  --

Vertex[] Transform (Vertex[] Vertices, int[] Indices, Matrix m) {
  Vertex[] Result = new Vertex[Indices.length];
  for (int i = 0; i < Indices.length; i++)
    Result[i] = Transform(m, Vertices[Indices[i]]);
  return Result;
}

problems:
each input may be NULL, so Indices.length could dereference a NULL pointer
Indices may contain indices outside the range of the Vertex array, so Vertices[Indices[i]] might be out of bounds
Transform may hang (fail to terminate)

Dynamic failure in mainstream languages

Solved problems:
Random memory overwrites
Memory leaks (?)

Solvable:
Accessing arrays out of bounds
Dereferencing null pointers
Integer overflow
Accessing uninitialized variables

50% of the bugs in Unreal can be traced to these problems!

Transform{n:nat}(Vertices:[n]Vertex, Indices:[]nat<n, m:Matrix):[]Vertex =
  foreach (i in Indices)
    Transform(m, Vertices[i])

the only problem left: transform may hang

how might this work?
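
Python can't express these dependent types statically, but a rough sketch of what the type system would guarantee can be written with runtime asserts standing in for the compile-time checks (apply_matrix is a made-up stand-in for the per-vertex transform):

```python
# Hypothetical runtime analogue of the dependently-typed Transform above.
# A real dependent type system would reject bad inputs at compile time;
# here we can only check them at run time.

def transform(vertices, indices, m):
    # "[n]Vertex" / "[]nat<n" -- the inputs must exist, never be None
    assert vertices is not None and indices is not None
    # every index is a natural number strictly below len(vertices)
    assert all(isinstance(i, int) and 0 <= i < len(vertices) for i in indices)
    return [apply_matrix(m, vertices[i]) for i in indices]

def apply_matrix(m, v):
    # stand-in for the real per-vertex transform: 2x2 matrix times 2-vector
    return (m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1])
```

With the asserts in place the only remaining failure mode is non-termination, which matches the slide's claim.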

  --

Dependent types

int (any integer)
nat (natural number, >= 0)
nat<n (natural number less than n)

Dependent functions

Sum(n:nat, xs:[n]int) = ..
a = Sum(3, [7,8,9])

Universal quantification

Sum{n:nat}(xs:[n]int) = ..
a = Sum([7,8,9])

  --

(maybe/option types): Separating the “pointer to t” concept from the “optional value of t” concept

xp:^int (pointer to int)
xo:?int (optional int)
xpo:?^int (optional pointer to int)

Advantages:
You can't dereference a NULL pointer
The compiler can force you to do the appropriate error checks

?int x = lookup(t, k)
FOUND(x) -> { use x ... }
NOTHING(x) -> { display error message of your choice }
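
Python has no `?int`, but the shape of the forced check can be sketched with a tagged result (FOUND/NOTHING and lookup are hypothetical names, not a real library):

```python
# Sketch of an option type: lookup returns (FOUND, value) or (NOTHING, None),
# and callers must inspect the tag before they can touch the value.

FOUND, NOTHING = "FOUND", "NOTHING"

def lookup(table, key):
    return (FOUND, table[key]) if key in table else (NOTHING, None)

t = {"a": 1, "b": 2}
tag, x = lookup(t, "b")
if tag == FOUND:
    result = x * 10          # safe: x is known to exist here
else:
    result = "not found"     # the error path is impossible to forget
```

A real option type makes the tag check mandatory at compile time; the sketch only makes it conventional.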

  --

Comprehensions (a la Haskell), for safely traversing and generating collections

Successors(xs:[]int):[]int =
  foreach (x in xs)
    x + 1

Now we can’t have an index out of bounds error!

But this ties the Collections object and Iterator interface directly to the language - sacrificing abstraction.

A guarded casting mechanism for cases where need a safe “escape”:

GetElement(as:[]string, i:int):string =
  if (n:nat<as.length = i)
    as[n]
  else
    "Index Out of Bounds"

  --

Exceptions impose sequencing constraints on concurrent execution.

Dependent types and concurrency must evolve simultaneously

  --

Analysis of Unreal code

Usage of integer variables in Unreal:
90% of integer variables in Unreal exist to index into arrays
  80% could be dependently-typed explicitly, guaranteeing safe array access without casting
  10% would require casts upon array access
The other 10% are used for:
  Computing summary statistics
  Encoding bit flags
  Various forms of low-level hackery

How true are these observations for other types of code?

"For" loops in Unreal:
40% are functional comprehensions (foreach)
50% are functional folds: operators that encapsulate simple patterns of recursion for processing lists
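
The two loop shapes map directly onto comprehensions and reduce; a small Python sketch of what the Unreal statistics are describing:

```python
from functools import reduce

xs = [3, 1, 4, 1, 5]

# ~40% of for-loops: a comprehension (foreach producing a new collection)
successors = [x + 1 for x in xs]

# ~50%: a fold -- a single operator capturing the recursion over the list
total = reduce(lambda acc, x: acc + x, xs, 0)
```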

  --

Accessing uninitialized variables

class MyClass {
  const int a = c + 1;
  const int b = 7;
  const int c = b + 1;
}
MyClass myvalue = new MyClass;
What is myvalue.a?

This is a frequent bug. Data structures are often rearranged, changing the initialization order.
Lessons from Haskell:
Lazy evaluation enables correct out-of-order evaluation.
Accessing circularly entailed values causes thunk reentry (divergence), rather than just returning the wrong value.
Lesson from Id90:
Lenient evaluation ("an evaluation strategy under which all reducible expressions are evaluated in parallel, except inside the arms of conditionals and inside lambda abstractions") is sufficient to guarantee this.
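
A rough Python analogue of the lazy-evaluation lesson: if each field is a thunk (here, a cached property), declaration order stops mattering, and a genuine cycle fails loudly instead of yielding a wrong value:

```python
from functools import cached_property

class MyClass:
    # initializers written "out of order": a depends on c, c depends on b
    @cached_property
    def a(self): return self.c + 1
    @cached_property
    def b(self): return 7
    @cached_property
    def c(self): return self.b + 1

m = MyClass()
# m.a forces c, which forces b -- declaration order is irrelevant
print(m.a)  # 9

class Circular:
    @cached_property
    def x(self): return self.y + 1
    @cached_property
    def y(self): return self.x + 1

# Circular().x diverges (RecursionError) rather than returning garbage,
# like thunk reentry in Haskell
```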

  --

Evaluation Strategy
Lenient evaluation is the right default.
Support lazy evaluation through explicit suspend/evaluate constructs.
Eager evaluation is an optimization the compiler may perform when it is safe to do so.

  --

Integer Overflow

The Natural Numbers:
data Nat = Zero | Succ Nat

Factoid: C# exposes more than 10 integer-like data types, none of which are those defined by (Pythagoras, 500BC).

In the future, can we get integers right?

Neat Trick: in an n-bit machine word, encode either a "small" integer (±2^(n-1)) or a pointer to a variable-precision integer.
Thus "small" integers carry no storage cost.
Additional access cost is ~5 CPU instructions.
But: array indexes don't need this encoding.
Since ~80% of integers can be dependently typed to access into an array, the amortized cost is ~1 CPU instruction per integer operation.
How true is this for other application domains?
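
A purely illustrative sketch of the tagging trick (Python ints are already arbitrary precision, so this only models the encoding; the HEAP list stands in for pointers to boxed bignums):

```python
# Tagging scheme for a 64-bit word: low bit 1 => small integer stored in the
# remaining 63 bits; low bit 0 => "pointer" (here: index into a heap list).

HEAP = []
SMALL_MIN, SMALL_MAX = -(2**62), 2**62 - 1

def encode(n):
    if SMALL_MIN <= n <= SMALL_MAX:
        return ((n << 1) | 1) & (2**64 - 1)   # tagged small int, no heap cost
    HEAP.append(n)                            # "allocate" a variable-precision int
    return (len(HEAP) - 1) << 1               # even word = pointer

def decode(w):
    if w & 1:                                 # the ~few-instruction overhead
        n = w >> 1
        return n - 2**63 if n >= 2**62 else n # sign-extend the 63-bit value
    return HEAP[w >> 1]
```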

  --

Reasonable type-system extensions could statically eliminate all:
Out-of-bounds array access (comprehensions, dependent types)
Null pointer dereference (optional variables)
Integer overflow (dependent types with pointer access)
Accessing of uninitialized variables (lenient evaluation)

  --

Memory model
Garbage collection should be the only option.

Exception Model
The Java/C# "exceptions everywhere" model should be wholly abandoned.
All dereferences and array accesses must be statically verifiable, rather than causing sequenced exceptions.
No language construct except "throw" should generate an exception.

-- http://www.cs.utexas.edu/~coonske/presentations/fgp.ppt

--

Lazy operations construct a query execution plan

that is, the object being operated on can request to see the whole chain of lazy pending operations at once
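
A minimal sketch of the idea: each lazy operation just records itself, so the collection can inspect (and potentially fuse or reorder) the whole pending plan before running anything. All names here (LazySeq, explain, run) are hypothetical:

```python
class LazySeq:
    """Collects pending ops; nothing runs until .run()."""
    def __init__(self, data):
        self.data = data
        self.plan = []            # the "query execution plan"
    def map(self, f, name):
        self.plan.append(("map", name, f)); return self
    def filter(self, p, name):
        self.plan.append(("filter", name, p)); return self
    def explain(self):
        # the object can show its whole chain of pending operations at once
        return [(kind, name) for kind, name, _ in self.plan]
    def run(self):
        out = self.data
        for kind, _, f in self.plan:
            out = [f(x) for x in out] if kind == "map" else [x for x in out if f(x)]
        return out

q = LazySeq(range(10)).filter(lambda x: x % 2 == 0, "evens").map(lambda x: x * x, "square")
q.explain()  # [('filter', 'evens'), ('map', 'square')]
q.run()      # [0, 4, 16, 36, 64]
```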

--

Scala inner classes: an inner class is bound to its enclosing object (path-dependent types); Outer#Inner names the inner type independent of any particular outer instance.

--

Scala nullary methods can be postfix (did they mean prefix? )

--

Also, go through the Scala tour and make "why Jasper" additions.

--

Scala

"override" keyword

--

The Scala tour comprehensions example is verbose because comprehensions are verbose, the pair constructor is verbose, and there is no repr. Note for "why Jasper" additions.

--

scala unapply does pattern matching with inverses

--

https://docs.google.com/presentation/d/15-7qFy6URdE7Owi2LitkQI_OHBu1AFWPUwHxgBc-O4E/preview?sle=true#slide=id.g1793dde48_1412

starting at slide 115

(ns jfdi (:use clojure.pprint))

;; so I can use pprint at REPL

jfdi> (+ 3 4)
7
jfdi> (+ 3 4 5 6 7) ;; takes variable arity
25
jfdi> (+) ;; including 0
0
jfdi> (+ 137 (* 32 3 25 13))
31337

jfdi> (def six 6) ;; define a value
#'jfdi/six
jfdi> (defn square [x] (* x x)) ;; fn def.
#'jfdi/square
jfdi> (def square (fn [x] (* x x))) ;; alternative / identical
jfdi> (square six)
36

jfdi> (defn factorial [n] (if (= n 0) 1 (* n (factorial (- n 1)))))
jfdi> (factorial 5)
120
jfdi> (map factorial (range 8)) ;; (range 8) => (0 1 2 3 4 5 6 7)
(1 1 2 6 24 120 720 5040)

jfdi> (defn factorial-2 [n] (reduce * (range 1 (inc n)))) ;; chance to show off reduce.
#'jfdi/factorial-2
jfdi> (factorial-2 7)
5040

jfdi> (class 5) ;; Clojure data lives on JVM
java.lang.Long
jfdi> (class "five") ;; same as Java's String
java.lang.String
jfdi> (class true)
java.lang.Boolean
jfdi> (class \C)
java.lang.Character

;; symbols and lists are what code's made of
jfdi> (class '+)
clojure.lang.Symbol
jfdi> (list '+ 1 2 3)
(+ 1 2 3)
jfdi> (eval (list '+ 1 2 3))
6

jfdi> (class :a) ;; string-like, interned, used for map fields.
clojure.lang.Keyword
jfdi> (def my-map {:white 1 :red 5 :blue 10})
jfdi> (get my-map :red)
jfdi> (my-map :red) ;; map *is* a function
jfdi> (:red my-map)
5 ;; all 3 return same result.
jfdi> (get my-map :orange)
nil ;; Java null

jfdi> [1 2 (+ 3 4)] ;; vector
[1 2 7]
jfdi> #{5 "five" "v" 5.0 "5"} ;; set
#{5.0 5 "five" "5" "v"}
jfdi> (int-array 1024) ;; prim. arrays
#<int[] [I@5a3184d8>

jfdi> (def a (atom 0)) ;; mutable "cell"
jfdi> @a ;; or: (deref a)
0
jfdi> (reset! a 38)
jfdi> @a
38
jfdi> (swap! a inc) ;; swap! is atomic (preferred)
39

jfdi> (reset! a 0)
jfdi> (dotimes [i 1e4] ;; 10k *conceptual* threads.
        (future (let [x @a] (reset! a (inc x)))))
;; what implicitly happens w/ naive imperative code / bad concurrency.
jfdi> @a
9544 ;; concurrency fail
jfdi> (reset! a 0)
jfdi> (dotimes [i 1e4] (future (swap! a inc))) ;; atomic/correct
jfdi> @a
10000 ;; correct
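
The same read-then-write hazard, sketched in Python: the racy version separates the read from the write (so another thread can interleave and an update is lost), while the locked version makes the increment one atomic step, like swap!:

```python
import threading

counter = 0
lock = threading.Lock()

def racy_inc():
    global counter
    x = counter          # read
    # (another thread may run here and read the same stale value)
    counter = x + 1      # write -- lost-update hazard, like reset! on a stale read

def atomic_inc():
    global counter
    with lock:           # like swap!: read-modify-write as one indivisible step
        counter += 1

threads = [threading.Thread(target=atomic_inc) for _ in range(1000)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000 -- the locked version is always correct
```

(The racy version only loses updates nondeterministically, so the sketch asserts nothing about it.)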

Not shown:

(def db
  [{:name "Ayla"  :gender :female :level 74 :element :fire :badassery }
   {:name "Crono" :gender :male   :level 53 :element :lightning :zombie true}
   {:name "Frog"  :gender :male   :level 60 :element :water}
   {:name "Lucca" :gender :female :level 61 :element :fire :glasses true}
   {:name "Marle" :gender :female :level 57 :element :water}
   {:name "Magus" :gender :male   :level 70 :element :shadow :hot-sister-issues true}
   {:name "Robo"  :gender :robot  :level 54 :element :shadow :theme-song "rickroll"}])

jfdi> (map :name db) ;; SELECT name FROM db
("Ayla" "Crono" "Frog" "Lucca" "Marle" "Magus" "Robo")
jfdi> (frequencies (map :gender db))
{:female 3, :male 3, :robot 1}
jfdi> (group-by :gender db)
{:robot [{:name "Robo", :level 54, :element :shadow, ...}]
 :female [<Ayla> <Marle> <Lucca>],
 :male [<Crono> <Frog> <Magus>]}

jfdi> (map :name (sort-by :level db)) ;; ...ORDER BY level
("Crono" "Robo" "Marle" "Frog" "Lucca" "Magus" "Ayla")
jfdi> (defn female? [record] (= (:gender record) :female))
#'jfdi/female?
jfdi> (filter female? db) ;; ...WHERE gender = female
({:gender :female, :name "Ayla", :level 74, :element :fire, ...}
 {:gender :female, :name "Lucca", :level 61, :element :fire, ...}
 {:gender :female, :name "Marle", :level 57, :element :water})
jfdi> (map (juxt :name :level) (filter female? db))
(["Ayla" 74] ["Lucca" 61] ["Marle" 57])

jfdi> (map :level (filter female? db))
(74 61 57)
jfdi> (reduce + (map :level (filter female? db))) ;; aggregation
192
jfdi> (defn average [coll] (/ (reduce + 0.0 coll) (count coll)))
#'jfdi/average
jfdi> (average (map :level (filter female? db))) ;; summary stats
64.0
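
The same SQL-flavored queries fall out of plain comprehensions over dicts in Python too (a sketch with a trimmed-down db):

```python
db = [
    {"name": "Ayla",  "gender": "female", "level": 74},
    {"name": "Crono", "gender": "male",   "level": 53},
    {"name": "Marle", "gender": "female", "level": 57},
]

names = [r["name"] for r in db]                       # SELECT name FROM db
females = [r for r in db if r["gender"] == "female"]  # ...WHERE gender = female
by_level = [r["name"] for r in sorted(db, key=lambda r: r["level"])]  # ...ORDER BY level
avg = sum(r["level"] for r in females) / len(females)  # summary stats
```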

Motivation: runtime tracing of functions is a powerful debug technique. It’d require a framework to do this in Java, but in Clojure, a macro suffices!

jfdi> (defmacro tracing [code]
        `(do (println "The code was: " '~code)
             (let [ret# ~code]
               (println "Returning " ret#)
               ret#)))
;; do sequences multiple forms and returns the last value (the others are used for side effects!)
;; '~code: unquote to insert the code, quote to block run-time evaluation.

jfdi> (tracing (+ 5 6))
The code was: (+ 5 6)
Returning 11
11
jfdi> (+ 5 (tracing (* 3 6)))
The code was: (* 3 6)
Returning 18
23

jfdi> (defn buggy-* [x y] (inc (* x y)))
#'jfdi/buggy-*
jfdi> (defn buggy-fct [n] (reduce buggy-* (range 1 (inc n))))
jfdi> (buggy-fct 4)
41 ;; wrong, because of the bug in buggy-*
jfdi> (with-redefs [buggy-* *] (buggy-fct 4))
24 ;; correct. So buggy-* is the problem.

jfdi> (def buggy-*-root buggy-*) ;; avoid clobber
jfdi> (with-redefs [buggy-* (fn [x y] (eval `(tracing (buggy-*-root ~x ~y))))]
        (buggy-fct 3))
;; this is slightly insane & in practice there are better ways.
;; but this is something you'd need a bulky, bloated framework to do in e.g. Java!
The code was: (jfdi/buggy-*-root 1 2)
Returning 3
The code was: (jfdi/buggy-*-root 3 3) ;; as you see, the bug's here. 3*3 != 10.
Returning 10
10
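
Python can't do this with a macro, but a decorator gets the weaker run-time half of the trick; a sketch replaying the same bug hunt (buggy_mul/buggy_fact are hypothetical translations of buggy-*/buggy-fct):

```python
from functools import reduce, wraps

def tracing(f):
    """Wrap f so each call prints its arguments and result --
    the run-time-only cousin of the Clojure tracing macro."""
    @wraps(f)
    def wrapper(*args):
        print("The call was:", f.__name__, args)
        ret = f(*args)
        print("Returning", ret)
        return ret
    return wrapper

def buggy_mul(x, y):
    return x * y + 1          # the bug: an extra +1

def buggy_fact(n, op=buggy_mul):
    return reduce(op, range(1, n + 1))

buggy_fact(3, op=tracing(buggy_mul))   # prints each step; returns 10, exposing the bug
```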

slide 106:

(map #(+ % 2) [2 3 5 7 11])

Want map in Java?

interface Function1 { Object invoke (Object x); }

class AddTwo implements Function1 {
  public Object invoke (Object x) {
    int res = (int)(Integer) x + 2;
    return res;
  }
}

class Map {
  public static Object[] invoke (Function1 f, Object[] src) {
    Object[] dest = new Object[src.length];
    for (int i = 0; i < src.length; i++) {
      dest[i] = f.invoke(src[i]);
    }
    return dest;
  }
}

class JavaExample {
  public static void main (String args[]) {
    Object[] data = new Integer[5];
    data[0] = 2; // ... fill with {2, 3, 5, 7, 11}
    AddTwo add2 = new AddTwo();
    Object[] res = Map.invoke(add2, data);
  }
}

---

Ygg2 1 day ago

link

My biggest theoretical gripe about Scala is that trait order is important! Because traits can override common functions, a class that is e.g. Ball with traits Shiny and Red is NOT the same as e.g. Ball with traits Red and Shiny. Why? Why complicate testing to the point where you need to test not only the traits for correct behavior, and not just the composing of traits, but even the order in which they are composed? Why?! Why complicate it so much?

Note: It's been a while since I've seen traits, this might have changed for better.

To me Scala is essentially what you'd get if Java, Haskell and C++ went to a wizard to have a child. The result is a magical mess. I seriously think that Scala has at least twice as many concepts as C++ and the same if not a greater number of really weird edge cases. I don't want to keep Scala in my mind, any more than I want to keep the Necronomicon.

reply

pkolaczk 1 day ago

link

As long as your traits are independent, the order does not matter. Anyway, how do you imagine this could be done better? For order not to matter, linearization would not be possible, and we'd end up with something much more complex and ugly, like multiple inheritance in C++ with all the diamond inheritance problems, etc.

reply

epsylon 1 day ago

link

Actually, there's a very elegant solution to the diamond problem. Eiffel does this with a renaming mechanism. Martin Odersky certainly knew about Eiffel's solution when he designed Scala, but my guess is that he chose the simpler solution of traits, just like most languages, because general multiple inheritance just isn't too useful.

Still, it's a shame that most people interested in programming languages have never heard about Eiffel, or ever read the superb book "Object-Oriented Software Construction" by Eiffel's creator, Bertrand Meyer. Eiffel has many flaws (the most prominent being the complete lack of support for functional idioms), but it has very good ideas as well. (Perhaps the most popular influence Eiffel had on other languages is Design by Contract and the uniform access principle, which you see in most "modern" languages.)

reply

Ygg2 1 day ago

link

Don't have an answer right this minute, but I think eschewing/forbidding variables in trait would prevent the most obvious such collisions.

If you can't override variables, you can't silently change the structure to depend on trait order. You'd have to explicitly state order.

PS. Of course if you have several same-signature methods that call super, which is generated depending on order, the same problem persists. So forbidding variables and calls to super should deal with most "magical" interactions.

reply

pkolaczk 1 day ago

link

If you can't call super nor use variables then this takes quite a lot of power from traits. IMHO being able to stack many traits one on another is a nice pattern.

reply

Ygg2 1 day ago

link

Yeah, you trade power for easier testing, and unforeseen side-effects for determinism. IMO it's worth it.

reply
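
The trait-order behavior debated in this thread shows up concretely in Python's mixin linearization (C3 MRO), so the Ball/Shiny/Red example can be run directly; a sketch:

```python
class Ball:
    def describe(self): return "ball"

class Shiny:
    def describe(self): return "shiny " + super().describe()

class Red:
    def describe(self): return "red " + super().describe()

# composition order changes the linearization, and hence which override runs first
class ShinyRedBall(Shiny, Red, Ball): pass
class RedShinyBall(Red, Shiny, Ball): pass

ShinyRedBall().describe()  # 'shiny red ball'
RedShinyBall().describe()  # 'red shiny ball'
```

Both are deterministic, but the two compositions are observably different types, which is exactly the testing burden being argued about.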

---

" That is, React builds the virtual DOM tree lazily for diffing based on what path in the tree actually changed.

As it turns out, the default shouldComponentUpdate implementation is extremely conservative, because JavaScript devs tend to mutate objects and arrays! So in order to determine if some properties of a component have changed, they have to manually walk JavaScript objects and arrays to figure this out.

Instead of using JavaScript objects, Om uses ClojureScript data structures which we know will not be changed. Because of this, we can provide a component that implements shouldComponentUpdate by doing the fastest check possible - a reference equality check. This means we can always determine if the paths changed, starting from the root, in logarithmic time.

"

(the point is that allowing the use of reference equality on immutable objects can be much faster)
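
A sketch of that shortcut in Python (same_tree is a made-up diff helper, with tuples standing in for immutable trees): because a shared subtree is guaranteed unchanged, an identity check can skip it without any deep walk.

```python
def same_tree(a, b):
    # immutable values let us bail out with a reference check before
    # falling back to a (potentially deep) structural walk
    if a is b:
        return True            # O(1): shared, therefore unchanged, subtree
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        return all(same_tree(x, y) for x, y in zip(a, b))
    return a == b

left = (1, (2, 3))
root_old = (left, (4, 5))
root_new = (left, (4, 6))      # only the right subtree was rebuilt
same_tree(root_old, root_new)  # False, but the shared `left` was skipped in O(1)
```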

---

C# try pattern (i forgot exactly what this is, but i think it's a convention where the word "Try" is in the method name, the return value is a bool indicating success or failure, and the real return value is placed into a variable that was passed in by reference (using an 'out' declaration)).

C# has a distinction between parameters declared 'ref' and those declared 'out'. Both are passed by reference, but 'out' has the additional constraint that whatever value the variable held when passed in must be ignored, i.e. it is ONLY there to put outputs into.
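
Python has no 'out' parameters, so the usual translation of the Try pattern returns a (success, value) pair instead; a sketch (try_parse_int is a hypothetical analogue of C#'s int.TryParse, not a stdlib function):

```python
# C#: bool int.TryParse(string s, out int result)
# Python analogue: return (ok, value) instead of writing through an out-param.

def try_parse_int(s):
    try:
        return True, int(s)
    except ValueError:
        return False, 0       # like an out-param, the value is defined either way

ok, n = try_parse_int("42")     # ok=True,  n=42
ok2, n2 = try_parse_int("xyz")  # ok2=False, n2=0 -- no exception thrown
```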


" [1] I tried Haskell and Elixir as candidates, but nested data holders with multiply circular references appear to be problematic to deal with in functional languages. Immutable data presents interesting challenges when it comes to cyclic graphs! The solutions suggested by the respective communities involved considerable boiler-plate. More importantly, the resulting code lost direct correspondence with the problem's structural elements. Eventually, I abandoned that approach.↩ "

asdasf 1 hour ago

link

His dismissal of Haskell is rather misinformed. Haskell has no such problems with cyclic data structures. And this:

>Except that abstracting things into functions that take the `old' graph as input, and answer the `new' graph as output are not very convenient or easy.

Is just crazy. Yes, it is very convenient and easy: http://hackage.haskell.org/package/mtl-2.0.1.0/docs/Control-Monad-State-Lazy.html

reply

--

vjeux 13 hours ago

link

The biggest cause of bugs before we used React is the fact that the "state" of the app is leaking all over the place. Symptoms were things like looking for dom nodes that didn't exist, events that triggered on code that was no longer active, bugs due to the app being in a state extremely complex to reproduce with crazy conditions all over the place, race conditions...

React does many things to mitigate those problems and those classes of bugs nearly dropped to 0.

In the end, we found that React actually helped us write a more reliable app compared to framework-less JavaScript code.

reply

codygman 7 hours ago

link

Mutable state can be a huge time sink. It's why I'm putting more and more time into functional programming, since I think that Haskell (and Clojure) is closer to what programming in the future will be like.

I think many programmers don't respect how much complexity is reduced when state isn't leaking all over the place.

Of course there will always be times when you need state, but many times it's worth jumping through a few hoops to make things work without state.

EDIT: If programmers learn C so they can think lower level, they should learn Haskell so they can think in terms of side effects, composability, and the advantages of pure functions. Just a reminder, pure functions can be created in any language though some (Clojure, Haskell) enforce purity.

reply

--

random idea from Doug, not sure if i like it, just writing it down to think about later:

" 3) A&B is a compiler generated type that implements a compiler generated interface representing the members shared by A and B. So if A has a *void Run()* method, and B has a *void Run()* method, but they are from independent inheritance hierarchies, A&B is a class derived from a greatest common ancestor of A and B, which implements an interface that contains a *void Run()* method. This would be a form of duck typing used to utilize the maximum amount of static information available to the compiler. "

--

i don't want to write code that looks like English, but i do think there are some semantics in natural languages that might inspire programming language features.

For example, i say "I am Bayle" and also "I am human" and also "I am red". In programming languages these might not look so similar:

"am i bayle?" may be "i == bayle"
"am i human?" may be "isSubclassOf(class(i), human)"
"am i red?" may be "i.color == red"

There are times in programming when you don't want this distinction. For example, in some code i create an exception type ConnectionException. Then i write some code where i catch an exception e and say "if class(e) == ConnectionException then ...". But then i decide to make a further distinction and so i create a subclass of ConnectionException, NetworkUnavailableException. Now in many languages my 'if' statement won't trigger on NetworkUnavailableException, because although NetworkUnavailableException is a subclass of ConnectionException, it is not equal to ConnectionException, any more than the concept 'bayle' is identical to the concept of 'human'. So the code is less extensible than it could be. I should have said 'if isSubclassOf(class(e), ConnectionException) then ...' just in case i ever wanted to subclass ConnectionException.
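
The same trap, concretely in Python: an exact-class comparison silently stops matching once a subclass is introduced, while isinstance keeps working.

```python
class ConnectionException(Exception): pass
class NetworkUnavailableException(ConnectionException): pass  # added later

e = NetworkUnavailableException()

# exact-class check: breaks when the subclass is introduced
type(e) == ConnectionException        # False

# subclass-aware check: still triggers for the new subclass
isinstance(e, ConnectionException)    # True
```

(This is also why Python's except clauses match on isinstance semantics, not class equality.)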