proj-oot-ootSyntaxNotes6

lmm 1 day ago [-]

I genuinely prefer

    n = if expr trueVal else falseVal

(which is what e.g. Scala does) over meaningless-unless-memorized '?' and ':'

reply

---

i note that i informally use period for 'objectname.field1' a lot. Also, i like/use 1:3 for the range operator a lot (1:3 == [1 2 3])

---

for x..y, i am still thinking it might be the way to reference the edge between x and y. However, the 'unix filepath' convention, where '..' means the parent of x, is also attractive. Could always use "x...y" for the edge; however, following the unix convention, this would make more sense if it meant 'root'. So could always use "x....y" for the edge. So:

x.y: "start at node x and follow edge labeled y" x..y: "start at node x, follow the 'parent' edge, then follow edge labeled y" x...y: "start at node x, follow the 'root' edge, then follow edge labeled y" x....y: "start at node x, go to the reification of the edge labeled y"

since all nodes exist within some graph (i guess?), the 'root' edge always exists (i guess the 'root' node is the metanode for the graph?). However, the 'parent' edge is not always there, since only some graphs will be trees.

---

without looking back on my earlier thinking about it, for functions i am guessing:

we want MLish notation (juxtaposition, not parens, to apply functions)

BUT, what about 0-ary functions? By the juxtaposition rule, they are evaluated whenever mentioned, eg if 'f' is a 0-ary function that evaluates to '4', and if 'g' is the addition function, then "g (f) 3 == g (4) 3 == 7". Note that we didn't get a type error "can't add a function to an integer", because "f" was applied to the zero arguments to the right of it. So, we may need some notation to 'freeze' a fully-applied function, eg "`f" or somesuch. 'Freeze' can also be used on functions which were non-zero-ary but are now zero-ary because they are fully applied: "`g 4 3" is a zero-ary function that returns 7; it is not the number 7. If everything were referentially transparent there would not be a difference, but with impure functions there is a difference.

And what about optional/default arguments? There are two obvious choices (you could do some other things, i guess, like don't set them until the runtime actually evaluates the function, but i guess that's more confusing): either they are set to their defaults the first time any arguments are given to the function (as soon as possible), OR they are set to their defaults when the function is fully evaluated according to its definition (as late as possible). In either case, the 'freeze' operator prevents them from being set. Right now i think 'as late as possible' is the way to go, eg if "g" is "+" but with some default arguments, then the partially applied "g 4" leaves the defaults open but "g 4 3" assigns the defaults; "`g 4 3" does not assign the defaults, however.
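(a rough Python analogue of the 'freeze' idea, just as a sketch: functools.partial stands in for "`", and an ordinary default argument stands in for Oot's defaults; none of this is Oot syntax)

    from functools import partial

    def g(x, y, z=100):         # z plays the role of a default argument
        print("evaluating g")   # side effect, so freezing vs. evaluating is observable
        return x + y + z

    g(4, 3)                     # fully applied: the default z is assigned and the body runs
    frozen = partial(g, 4, 3)   # like "`g 4 3": fully applied but frozen; nothing runs yet
    frozen()                    # only now is the default assigned and the body evaluated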

---

Erlang vs. Java syntax:

" -module(geometry).
-export([area/1]).

area({rectangle, Width, Ht}) -> Width * Ht;
area({square, X}) -> X * X;
area({circle, R}) -> 3.14159 * R * R.

Now we’ll compile and run it in the Erlang shell:

1> c(geometry).
{ok,geometry}
2> geometry:area({rectangle, 10, 5}).
50
3> geometry:area({circle, 1.4}).
6.15752

Pretty easy… Here’s some Java that does something similar:

abstract class Shape { abstract double area(); }

class Circle extends Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius*radius; }
}

class Rectangle extends Shape {
    final double ht;
    final double width;
    Rectangle(double width, double height) { this.ht = height; this.width = width; }
    double area() { return width * ht; }
}

class Square extends Shape {
    final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}
" -- https://pragprog.com/articles/erlang/

---

so can we do similar, but with type declarations?

iface shape = {num = area(shape);}
iface rectangle <: shape = {num width, height; area = {width*height}}
iface square <: shape = {num length; area = {length*length}}
iface circle <: shape = {num radius; area = 3.14159 * radius * radius}

of course,

---

y'know, the Lisp convention of ending predicate function names with '?' does aid readability. Eg:

(->> (range 5 10)    ;; (5 6 7 8 9) List
     (filter odd?)   ;; (5 7 9)
     (map inc)       ;; (6 8 10)
     (reduce +))     ;; 24
" -- from [1]

---

in clojure, having the keys in a dict literal just be symbols prefixed with ':' leads to some uniformity:

" Nested updates is also easy to do. Works for both maps and vectors.

(assoc-in {:a {:b {:c 1}}} [:a :b :c] 2)
;;        ^__target        ^__path    ^__value

returns {:a {:b {:c 2}}}

(update-in {:a {:b {:c 1}}} [:a :b :c] inc)
;;         ^__target        ^__path    ^__updating function

returns {:a {:b {:c 2}}}
" -- from [2]

note the uniformity between the target and the path here (between {:a {:b {:c 1}}} and [:a :b :c]). If dicts were like {a={b={c=1}}}, then the path wouldn't look the same.
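for comparison, a rough Python version of the same path-based update (assoc_in here is a made-up helper, not a standard library function); note that without keywords the target and the path no longer look alike:

    def assoc_in(d, path, value):
        # return a copy of nested dict d with the value at 'path' replaced
        if len(path) == 1:
            return {**d, path[0]: value}
        return {**d, path[0]: assoc_in(d[path[0]], path[1:], value)}

    assoc_in({"a": {"b": {"c": 1}}}, ["a", "b", "c"], 2)
    # => {'a': {'b': {'c': 2}}}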

what exactly is the ':' doing there? Is it just like a 'quote' in Lisp? Probably not quite; it's also probably alerting the language that this is a variable. I think keywords are translated to a unique opaque integer by the language implementation.

but otoh it's nice to have a non-shifted key like '=' instead of shifted ':' for such a common occurrence.

so mb we should do

(assoc-in {a= {b= {c= 1}}} [a= b= c=] 2) ;; ^__target ^__path ^__value

?

---

---

'keywords' (like :a, i think) vs 'symbols' in clojure: http://stackoverflow.com/questions/1527548/why-does-clojure-have-keywords-in-addition-to-symbols

---

" FizzBuzz. Here is an example in Java:

public class FizzBuzz {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            if (((i % 5) == 0) && ((i % 7) == 0)) System.out.print("fizzbuzz");
            else if ((i % 5) == 0) System.out.print("fizz");
            else if ((i % 7) == 0) System.out.print("buzz");
            else System.out.print(i);
            System.out.print(" ");
        }
        System.out.println();
    }
}

And here is an example in Clojure:

(doseq [n (range 1 101)]
  (println (match [(mod n 3) (mod n 5)]
             [0 0] "FizzBuzz"
             [0 _] "Fizz"
             [_ 0] "Buzz"
             :else n)))

....

Java (using an anonymous Predicate class):

 public boolean hasUpperCase(String word) {
     if (null != word)
         return any(charactersOf(word), new Predicate() {
             public boolean apply(Character c) { return isUpperCase(c); }
         });
     else
         return false;
 }

And here is the same in Clojure:

(defn has-uppercase? [string] (some #(Character/isUpperCase %) string)) " -- http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end

---

imh 15 hours ago [-]

Looking through the really abstract scala code they linked brings up a problem that really frustrates me in haskell. Why doesn't anybody document their really abstract code? You know it's going to be confusing, so why not help out? If I have a type like

    def foreignKey[P, PU, TT <: AbstractTable[_], U] ...

It's not sufficient to document the function's arguments. You also need to document the type variables!

...

tel 12 hours ago [-]

I mostly agree with you, but I also think a big part of what Haskell-like FP shows is how often your code is invariant to so many things that the variables no longer have any sense to them whatsoever.

That doesn't really excuse badly named variables when that doesn't hold. It's more that it's something sort of novel to a lot of Haskell-like programmers and so we all get excited about it and probably overdo it somewhat. But on the other hand, I think it's very well-justified often enough.

For instance, with

    class Functor f where
      fmap :: (a -> b) -> (f a -> f b)

there are many words you could give to f, a, and b but they're essentially all misleading.

---

Elixir pattern matching example:

" def example_pattern(%{ "data" => %{ "nifty" => "bob", "other_thing" => other}}) do
    IO.puts(other)
  end "

"Structs can be defined as types and then utilized in pattern matching as well."

more detail and more examples: https://quickleft.com/blog/pattern-matching-elixir/

" A Note About Immutable Variables

Elixir is a functional language which means that variables can’t change value. You can, however, reuse variable names. If you set x = 2 and then set x = 3, Elixir doesn’t mind.

iex> x = 2
2
iex> y = 3
3
iex> x = y
3
iex> x
3
iex> ^x = 2   # force no reusing of variables
(MatchError) no match of right hand side value: 2

Behind the scenes, though, there's still a version of x that is set to 2. For instance, if you pass x to another function that's being run concurrently and then change x, the concurrent function will still have the original value of x. You can force variables to not be reused by using ^, but ^x = 3 would still match because x actually is 3. "
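(the same point, sketched in Python: a value already handed to a concurrently running function is unaffected by later rebinding of the name)

    import threading, time

    def worker(v):
        time.sleep(0.1)
        print(v)          # prints 2: the value x had when it was passed

    x = 2
    threading.Thread(target=worker, args=(x,)).start()
    x = 3                 # rebinding x does not change what the worker already received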

in Elixir when you match something, it's an expression, and the return value is the entire thing matched; if the match fails, that's an exception, eg:

" iex> [head | tail] = [1, 2, 3, 4]
[1, 2, 3, 4]
iex> head
1
iex> tail
[2, 3, 4]

iex> x = 2
2
iex> y = 3
3
iex> 2 = x
2            # waaaat?
iex> 2 = y
(MatchError) no match of right hand side value: 3 "

---

other languages that have symbols/keywords (Clojure, Ruby, Elixir) use :colonprefix. Should we use that too? Right now we're using UPPERCASE for that.

---

Elixir uses "do:" for EOL-terminated "do" (usually it's "end"-terminated: "do...end")

eg

" def square([]), do: []
  def square([head | tail]) do
    [head * head | square(tail)]
  end "

---

this guy identifies Go interfaces with pattern matching in eg Elixir:

" Both use matching to define function calls, Go via interfaces and Elixir via pattern matching.

Even though Go allows functions to be called by a specific interface type, g.area(), it’s essentially the same thing as calling area(g). ... The biggest difference here is that with Go, the patterns are defined outside of a function for reuse but can result in creating a lot of duplicate interfaces if they aren’t well organized. Elixir can’t reuse the patterns as easily, but the pattern is always defined in the exact place where it is used. " -- [3]

certainly we want first-class, re-usable patterns in Oot. So far i am thinking our pattern matching will be with 'graph regexes'; perhaps these will be the same as our interfaces, i'm not quite sure yet. Our interfaces also are subclassable, and have defaults.

---

 sbuttgereit 18 hours ago [-]

> Hence the nightmares with levels of quoting and escaping.

PostgreSQL has an interesting approach to this problem that I've found really straightforward and allows me to express text as text without getting into strange characters. What they've done is allowed using a character sequence for quoting rather than relying on a single character. They start with a character sequence that is unlikely to appear in actual text: $$; it's called dollar quoting. Beyond just $$, you can insert a word between the $$ to allow for nesting. Better explained in the docs:

https://www.postgresql.org/docs/current/static/sql-syntax-le...

The key here is that I am able to express string literals in PostgreSQL code (SQL & PL/pgSQL) using all of the normal text characters without escaping, and the $$ quoting hasn't come with any additional cognitive load like complex escaping can (and before dollar quoting, PostgreSQL had nightmarish escaping issues). I wish other languages had this basic approach.

reply

DougWebb 17 hours ago [-]

Perl's had something like that for a long time: quote operators. You can quote a string using " or ' (which mean different things), and you can quote a regex using /. But for each of these you can change the quote character by using a quote operator: qq for the double-quote behavior, q for the single-quote behavior, and qr for the regex behavior. (There are a few others too, but I used these most often.)

    my $str1 = qq!This is "my" string.!;
    my $str2 = qq(Auto-use of matching pairs);
    $str2 =~ qr{/url/match/made/easy};

The work I did with Perl included a LOT of url manipulation, so that qr{} syntax was really helpful in avoiding ugly /\/url\/match\/made\/hard/ style escaping.

reply

brennen 13 hours ago [-]

Perl is still, I think, the gold standard for quoting and string manipulation syntax. I am to this day routinely perplexed by the verbosity and ugliness of simple operations on strings in other languages.

(Of course, this may also be one of the reasons that programmers in its broad language family have a pronounced tendency to shoehorn too many problems into complex string manipulation, but I suppose no capability comes without its psychological costs.)

reply

i336_ 7 hours ago [-]

Yup, the 8085 CPU emulator in VT102.pl[1] uses a JIT which is essentially a string-replacement engine.

[1]: http://cvs.schmorp.de/vt102/vt102 (note - contains VT100 ROM as binary data, but opens in browser as text)

reply

wruza 16 hours ago [-]

Perl also supports heredocs — blocks of full lines with explicit terminator-line:

  print '-', substr(<<EOT, 0, -1), '!\n';
  Hello, World
  EOT

Prints:

  -Hello, World!

iirc sh-shells also have that.

reply

leephillips 17 hours ago [-]

This seems like an awesome feature. I wish Python had something like it.

reply

masklinn 17 hours ago [-]

Python has triple-quoted strings which generally do the trick, and uses prefixes for "non-standard" string behaviours (though it doesn't have a regex version IIRC, Python 3.6 adds interpolation via f-strings)

    str1 = f"""This is "my" string."""
    str2 = """Auto-use of matching pairs"""
    str3 = r"""/url/match/made/easy"""

reply

leephillips 17 hours ago [-]

Yes, I've belatedly caught on to using triple-quotes to avoid some escaping. But I didn't know about the f-strings - thanks! (I'll be using those when I start using 3.6.)

reply

---

ASCII control codes discussion:

http://www.catb.org/esr/faqs/things-every-hacker-once-knew/

---

in Golang, "cuddled else is mandatory. Parse error otherwise"

a Scala guy calls this a WTF [4]

---

from golang: " Conversion rules

...

return Person{
    Name:     aux.Name,
    AgeYears: aux.AgeYears,
    SSN:      aux.SSN,
}

...

Since Go 1.8 you can simply do:

return Person(aux)

Both types still need to have:

    same sequence of fields (the order matters)
    corresponding fields with same type.

"

---

some notes on golang, as i go through the tutorial (tour of go) again; i think i already wrote notes similar to these somewhere, but i'll write them again

for loops and while loops are unified:

	for i := 0; i < 10; i++ {
		sum += i
	}
  for ; sum < 1000; { sum += sum }

you can drop the semicolons (when the init and post statements are empty) to make 'for' an even better 'while':

	for sum < 1000 {
		sum += sum
	}

we should just use 'loop'

in both loops and 'if's, you can preceed the test with a statement:

	if v := math.Pow(x, n); v < lim {
		return v
	}

in both loops and 'if's, the test does not need ()s (unlike Perl), but the body does need {}s:

	if x < 0 {
		return sqrt(-x) + "i"
	}

"

Variables declared without an explicit initial value are given their zero value.

The zero value is:

    0 for numeric types,
    false for the boolean type, and
    "" (the empty string) for strings.

"

"Outside a function, every statement begins with a keyword (var, func, and so on)..."

" The example shows variables of several types, and also that variable declarations may be "factored" into blocks, as with import statements."

import ( "fmt" "math/cmplx" )

var (
    ToBe   bool       = false
    MaxInt uint64     = 1<<64 - 1
    z      complex128 = cmplx.Sqrt(-5 + 12i)
)

---

Every release replaces more keywords with punctuation marks.

I can't think of any other examples. Closures get used heavily in JS and arrow functions do make their use quicker and more concise, as well as fixing `this`. Compared to other languages like Scala, OCaml, Elm and Haskell, with symbols for functional combination and pipelining, pattern matching, and whatever you call the thing where you can miss out parameters and use underscores in function bodies in Scala ... JS is relatively light on symbols.

reply

---

complaint about coffeescript

 williamdclt 4 hours ago [-]

We did. And had to "decaffeinate" everything a few months later because it is hell to work with implicit returns, objects without brackets, no spread operator, significant whitespace, no variable declaration keyword...

---

forty 29 minutes ago [-]

At work we started node with node 0.6, CS (((coffeescript))) was nice at the time, it would bring cool stuffs such as arrow functions. Nowadays we are using Typescript, and we love it. To be honest, I am quite happy we are getting rid of CS, the lack of variable declaration specific syntax (as opposed to assignment statements) is really a big bug of the language IMO which can lead to really awful bugs in your code if you are not careful. We have been using https://github.com/decaffeinate/decaffeinate to help getting rid of our CS files, it works quite well.

reply

---

from [5]

python3 advanced unpacking

a, b, *rest = range(10)
first, *_, last = f.readlines()

---

from [6]

python3 keyword-ONLY arguments (can't be given positionally)

def f(a, b, *, option=True):

No more, "Oops, I accidentally passed too many arguments to the function, and one of them was swallowed by a keyword argument".

No more "I reordered the keyword arguments of a function, but something was implicitly passing in arguments expecting the order". Make your APIs "future change proof" by preventing callers from relying on the positions of keyword arguments.
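a minimal sketch of how the bare '*' behaves (just the example function above, exercised):

    def f(a, b, *, option=True):
        return (a, b, option)

    f(1, 2)                  # ok; option defaults to True
    f(1, 2, option=False)    # ok; option must be passed by name
    # f(1, 2, False)         # TypeError: f() takes 2 positional arguments but 3 were given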

---

evincarofautumn 96 days ago [-]

Yup, overloading existing operators for new meanings is smelly. It’s better if the language allows creating new operators, so you’re not stepping on someone else’s semantics. For example, Haskell has “</>” in the “filepath” package as an alias of “combine”. Custom operators can certainly be abused, but most packages that define operators only do so for good reason, and crucially they’re still searchable with Hoogle.

 Animats 96 days ago [-]

You can even set the precedence of your new operator in Haskell, and confuse everybody. Maybe beyond +, -, *, and /, you should have to use parentheses.

Here's C's operator precedence.[1] All 15 levels.

Think of the maintenance programmer who will have to fix this.

[1] http://en.cppreference.com/w/c

---

moonscript (luascript alternate syntax):

" It also adds table comprehensions, implicit return on functions, classes, inheritance, scope management statements import & export, and a convenient object creation statement called with.

...

Overview of Differences & Highlights

A more detailed overview of the syntax can be found in the reference manual.

    Whitespace sensitive blocks defined by indenting
    All variable declarations are local by default
    export keyword to declare global variables, import keyword to make local copies of values from a table
    Parentheses are optional for function calls, similar to Ruby
    Fat arrow, =>, can be used to create a function with a self argument
    @ can be prefixed in front of a name to refer to that name in self
    ! operator can be used to call a function with no arguments
    Implicit return on functions based on the type of last statement
    : is used to separate key and value in table literals instead of =
    Newlines can be used as table literal entry delimiters in addition to ,
    \ is used to call a method on an object instead of :
    +=, -=, /=, *=, %=, ..= operators
    != is an alias for ~=
    Table comprehensions, with convenient slicing and iterator syntax
    Lines can be decorated with for loops and if statements at the end of the line
    If statements can be used as expressions
    Class system with inheritance based on metatable’s __index property
    Constructor arguments can begin with @ to cause them to automatically be assigned to the object
    Magic super function which maps to super class method of same name in a class method
    with statement lets you access anonymous object with short syntax

About

The syntax of MoonScript has been heavily inspired by the syntax of CoffeeScript. MoonScript is CoffeeScript for Lua.

MoonScript would not have been possible without the excellent tool LPeg for parsing.

"

---

" Fortran array handling features

Arrays (or in physics-speak, matrices) lie at the heart of all physics calculations. Fortran 90+ incorporates array handling features, similar to APL or Matlab/Octave. Arrays can be copied, multiplied by a scalar, or multiplied together quite intuitively as:

A = B
A = 3.24*B
C = A*B
B = exp(A)
norm = sqrt(sum(A**2))

Here, A, B, C are arrays, with some dimensions (for instance, they all could be 10x10x10). C = A*B gives an element-by-element multiplication of A and B, assuming A and B are the same size. To do a matrix multiplication, one would use C = matmul(A,B). Almost all of the intrinsic functions in Fortran (Sin(), Exp(), Abs(), Floor(), etc) can take arrays as arguments, leading to ease of use and very neat code. Similar C++ code simply does not exist. In the base implementation of C++, merely copying an array requires cycling through all the elements with for loops or a call to a library function. Trying to feed an array into the wrong library function in C will return an error. Having to use libraries instead of intrinsic functions means the resulting code is never as neat, as transferable, or as easy to learn.

In Fortran, array elements are indexed using the simple syntax A[x,y,z], whereas in C++ one has to use A[x][y][z]. Arrays are indexed starting at 1, which conforms to the way physicists talk about matrices, unlike C++ arrays, which start at 0. The following Fortran code shows a few more array features:

A = (/ (i, i = 1,100) /)
B = A(1:100:10)
C(10:) = B

First a vector A is created using an implicit do loop, also called an array constructor. Next, a vector B is created from every 10th element of A using a ‘stride’ of 10 in the subscript. Finally, array B is copied into array C, starting at element 10. Fortran supports declaring arrays with indices that are zero or negative:

double precision, dimension(-1:10) :: myArray

A negative index may sound silly, but I have heard that they can be very useful – imagine a negative index as an area with ‘extra space’ for annotations. Fortran also supports vector-valued indices. For instance, we can extract elements 1, 5, and 7 from a Nx1 array A into a 3×1 array B using:

subscripts = (/ 1, 5, 7 /)
B = A(subscripts)

Fortran also incorporates masking of arrays in all intrinsic functions. For instance, if we want to take the log of a matrix on all of the elements where it is greater than zero we use

log_of_A = log(A, mask= A .gt. 0)

Alternatively we may want to take all the negative points in an array and set them to 0. This can be done in one line using the ‘where’ command:

where(my_array .lt. 0.0) my_array = 0.0

" -- [7]

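most of these have close analogues in Python/NumPy; a sketch (mine, not from the quoted article; NumPy is 0-based where Fortran is 1-based):

    import numpy as np

    A = np.arange(1, 101, dtype=float)   # like the array constructor (/ (i, i = 1,100) /)
    B = A[0:100:10]                      # every 10th element (a 'stride' of 10)
    C = np.zeros(B.size + 9)
    C[9:] = B                            # copy B into C starting at (1-based) element 10

    C2 = A[:10] * B                      # element-by-element multiplication
    D = np.exp(A)                        # intrinsics accept whole arrays

    subscripts = np.array([0, 4, 6])     # vector-valued subscript (elements 1, 5, 7 in 1-based terms)
    E = A[subscripts]

    log_of_A = np.log(A, where=A > 0, out=np.zeros_like(A))   # masked intrinsic
    A[A < 0] = 0.0                       # the one-line 'where' statement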
---

kinda neat but probably too much effort to type:

[8]

kazinator 110 days ago [-]

I have some reservations about how this is designed.

All we need are columns labeled with conditions. We don't need rows. And the matrix can just have true/false/don't-care entries, with code assigned to rows.

Concretely, say we have these conditions:

   (> x y)  (stringp foo) (oddp n)

Right? Okay, so now we can identify the combinations of these and assign them to code like this:

   (> x y)  (stringp foo) (oddp n)
      #t                     #t       (whatever)
                #t           #t       (other-thing)
      #t        #f                    (etc)

There could be a way to mark some of the rows as having "fall through" behavior. If they match, the expression is evaluated (for its side effects, obviously), but then subsequent rows can still match.

This could be worked into a straightforward S-exp syntax without any diagramming shennanigans:

   (table-cond
      (> x y) (stringp foo) (oddp n)
      #t      ()            #t       (let [...]
                                       (whatever))
      ()      #t            #t       (other-thing)
      #t      #f            ()       (etc))

Here, don't cares are denoted using (). Something else could be chosen.

A #f entry means "must be explicitly false". A blank column entry is a "don't care"; that condition is not taken into account for that row.

porges 110 days ago [-]

At that point aren't you just doing normal Racket pattern matching?

    (match
      (list (> x y) (stringp foo) (oddp n))
      [(list #t     _             #t      ) (whatever)]
      [(list _      #t            _       ) (other-thing)]
      [(list #t     #f            _       ) (etc)])

... or is that the joke :)

kazinator 110 days ago [-]

There you go. That just needs a very light syntactic sugar and it's there. The elements of the lists are constants, so you want to use a literal, and the quoting of that can all be hidden. The (list ...) around the incoming expressions can be trivially hidden also.

The one thing I suspect match probably doesn't do is feature a fall-through mechanism that I alluded to; say we want (other-thing) for its side-effect, and then still evaluate (etc) if (> x y).

(Under no circumstances do we want fall-through to be opt-out, like in the C switch statement with its forgotten break bugs, but opt-in.)

Also, this mechanism could be optimized. Since the pattern lists must only contain #t, #f and _, they can be validated to contain nothing else. Accordingly, they can be arithmetically encoded (two bits per symbol), and subject to a numeric dispatch. '(#f _ #t) is a six bit number; '(_ #f #f) is another six-bit number and so on. Arguably, match itself could do that, but it's rather specialized.

---

" REPL

The grammar of Go needs to be changed to support line-by-line evaluation. Top-level constructions in a .go file are declarations, not statements. There is no sequence in the declarations; all are evaluated simultaneously across all the files in a package. A declaration on an earlier line in a file can happily refer to a name declared later in the file. If you want to type Go declarations into a REPL, nothing can execute until you declare the package done.

So the first thing you need to do to define a REPL for Go is to step down a level. Instead of declarations, process statements. Pretend everything typed into the REPL is happening inside the func main() {} of a Go program. Now there is a sequence of events and statements can be evaluated as they are read.

This shrinks the set of programs you can write dramatically. In Go there is no way to define a method on a type inside a function (that is, using statements). There is a good reason for this: all the methods of a type need to be defined simultaneously, so that the method set of a type doesn’t change over time. It would lead to a whole new class of confusing errors if you could write:

func main() {
    type S string
    var s S
    _, ok1 := s.(io.Reader)
    func (S) Read(b []byte) (int, error) { ... }
    _, ok2 := s.(io.Reader)
    fmt.Println(ok1, ok2)  // Prints: false, true
}

That is why you cannot write that in Go.

So for the language to be REPL-compatible it needs serious grammar surgery, which would make a REPL possible, but hurt the readability of big complex programs.

Neugram has its own statement-based method syntax https://github.com/neugram/ng/blob/master/eval/testdata/method2.ng , which diverges in a small but significant way from Go. (Though it won’t be properly functional until the Go generating backend is complete.)

"

---

crawshaw 18 hours ago [-]

I made this point poorly, because I agree with you that Go is quite easy to read and very predictable.

With more words: I have written a parser and type checker for an ML-style language, with parametric types and several other neat tricks in it, and I've now written a parser and type checker for a good subset of Go. The latter has been far more work. I am not entirely sure how to explain the work. Go has lots of syntactic and type conveniences that are easy to use and read, but quite difficult to implement.

As there are few implementers and many users, I think the original designers of Go picked well when they put the work on us implementers.

reply

ainar-g 12 hours ago [-]

Can you elaborate on what syntactic conveniences are difficult to implement and why? Language design is one of my hobbies, so I really would like to know that.

reply

crawshaw 7 hours ago [-]

One good example is untyped constants. They form a large extension to the type system only present at compile time. It is a good example because there is an excellent blog post describing them in detail: https://blog.golang.org/constants

In particular note the difference between

    const hello = "Hello, 世界"

and

    const typedHello string = "Hello, 世界"

one of these can be assigned to named string types, the other cannot.

As a user of Go I find untyped constants to be extremely useful. They almost always do what I want without surprise. Implementing them however, is not trivial.

A trickier example is embedding. It adds a lot of complexity to the type system. As a user, it can be a bit surprising, but I admit it is very useful.

reply

---

kotlin:


    data class Animal(val name: String, val numberOfLegs: Int)
    val dog = Animal(
        name = "dog",
        numberOfLegs = 4
    )

---

" These [newer] languages are all infix. Which is extraordinarily clumsy for anything but arithmetic expressions. And even those are comfortable only because we learned them in Algebra 101. Do you remember the learning curve? "

---

mb prefer characters in https://en.wikipedia.org/wiki/GSM_03.38 ?

---



https://blog.golang.org/gos-declaration-syntax

note: they say that although in almost all cases, the type comes after the name, in the case of pointers, they still use a PREFIX asterisk (prefix *), b/c otherwise there would be ambiguity with * as multiplication. This special case apparently causes you to have to use parens a lot in type expressions.

"Switch without a condition is the same as switch true. This construct can be a clean way to write long if-then-else chains."

Array literal syntax:

[3]bool{true, true, false}

Slice literal syntax:

[]bool{true, true, false}

slicing syntax: a[0:5]

---

https://docs.microsoft.com/en-us/dotnet/articles/fsharp/language-reference/symbol-and-operator-reference/

---

mb

obj.method(args)!

could be shorthand for

obj = obj.method(args)

eg in Python, instead of 'set.union' being immutable data and 'set.add' being a mutation, you would make set.add immutable too and do the following to transform that into a mutation:

s = set()
s.add(3)!

you could then still use '!' at the end of method names to mark mutations:

print!('hi')

although i don't like having any shifted characters in 'print'; still, mb that could be an exception.
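in today's Python terms the '!' would just be sugar for an explicit rebinding, eg (a sketch of the desugaring, not real syntax):

    s = set()
    # hypothetical:  s.union({3})!
    # desugars to:
    s = s.union({3})   # s is rebound to a new set; nothing is mutated in place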

---

mb function(args) should work, and be optional (the same as 'function (args)', in the usual MLish syntax)

---

 sixo 11 hours ago [-]

'Concurrency-by-default' is similar to a notation I've been using to map out async service calls. It's just this: lines are terminated with "," or ";". A comma doesn't block and all comma-separated lines are executed in any order, while a semicolon blocks. Names are only usable when a semicolon is reached, and a semicolon unblocks flow when all preceding names are bound. Probably code is scoped into { } blocks. So a lambda is like "pyth_distance = {x, y; sqrt(x^2 + y^2)}". A series of async callbacks would be given by an inner block {x = call1(), y = call2(); pyth_distance(x,y)}, allowing you to do any manipulations you can do with normal code.

Might try making a toy language out of it eventually.

reply
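(a rough sketch of the comma/semicolon idea above in Python asyncio terms; call1 and call2 are made-up stand-ins for the async service calls)

    import asyncio, math

    async def call1(): return 3.0
    async def call2(): return 4.0

    def pyth_distance(x, y):
        return math.sqrt(x**2 + y**2)

    async def main():
        # the comma-separated lines: both calls run concurrently, in any order
        x, y = await asyncio.gather(call1(), call2())
        # the semicolon: flow unblocks once x and y are both bound
        print(pyth_distance(x, y))   # 5.0

    asyncio.run(main())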

runT1ME 10 hours ago [-]

You're close to inventing monads! I did this once too for async reasons also, and it's how I wandered into the world of Monads and functional programming. I would definitely take a look at Scala's futures and what it means to be a monad. This is probably the best way to internalize them. :)

reply

 chubot 9 hours ago [-]

Hm the terminators are much like shell, e.g.

  1. synchronous:

     sleep 1 ; sleep 2 ;

  2. parallel:

     sleep 1 & sleep 2 &
     wait
     wait

The results have to be files... in shell you think of the file system as your "variables", so everything is somewhat global.

reply

achamayou 11 hours ago [-]

That's exactly what || (concurrent statements) and ; (sequential statements) do in Esterel.

reply

already added to ootToReadsCondensed:

---

go syntax for reading from a channel in an expression (ie without assigning to a variable):

  fmt.Println(<-ch)

---

something about how it's good to have the scanner be able to identify keywords:

http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012560.html

---

. could have a dual use:

obj.method(...) could call 'method' of 'obj', but obj.fn(...) could also be shorthand for fn(obj,...)
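a toy Python illustration of that fallback rule (the UFCS wrapper and the shout function are made up, just to show the dispatch; not a proposal for how Oot would implement it):

    class UFCS:
        # obj.fn(args) falls back to fn(obj, args) when obj has no such method
        def __init__(self, obj, namespace):
            self._obj, self._ns = obj, namespace
        def __getattr__(self, name):
            try:
                return getattr(self._obj, name)        # a real method wins
            except AttributeError:
                fn = self._ns[name]                    # otherwise look up a free function
                return lambda *args, **kw: fn(self._obj, *args, **kw)

    def shout(s, suffix="!"):
        return s.upper() + suffix

    w = UFCS("hello", globals())
    w.upper()      # 'HELLO'   -- ordinary method call
    w.shout("?")   # 'HELLO?'  -- shorthand for shout("hello", "?")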

---

phs2501 6 hours ago [-]

Oddly, as someone who was reasonably into Common Lisp, what really viscerally turned me off to Clojure was the use of square brackets.

This sounds petty but I actually have some rationalization for it. Once you learn to read Lisp (mostly looking at the indentation and ignoring the parens) it's really nice that there's only one kind of delimiter in the language. It allows a lot of easy structure editing with a decent editor. I had emacs set up to use the unshifted square bracket keys to create and exit balanced parenthesis pairs. Once you got used to this, this was really pleasant to write and mutate code.

As soon as I saw Clojure, I knew that my setup would unavoidably get twice as complicated because they added another delimiter that's used in random places in the code. (Why are function arguments a vector instead of a list? Just because the designer thinks it looks better? Why are let bindings a vector? Why are ordinary function calls and function bodies a list? It just feels arbitrary.) That in and of itself really turned me off. (The fact that it's the JVM really didn't help either, but that was actually secondary to the above.)

Other issues for me were the lack of two layers in the LET statement (i.e. it's (let [a 1 b 2] ...) rather than (let ((a 1) (b 2)) ...). To some that probably "looks better" but it kills the ability to use emacs's transpose-sexps to swap the order of your let bindings around.

All in all the syntax just didn't seem well thought out to me. OTOH to non-lispers it probably looked better, so maybe that was the goal.

reply

_halgari 4 hours ago [-]

What's interesting is that for me Clojure was my first lisp. One of the main reasons I never learned lisp before Clojure was all the parens that made the language impossible to read. Clojure cleans up the "normal" lisp syntax quite a bit, and that made it a lot more palatable.

Beauty is in the eye, and all that, but CL code still makes me want to claw my eyes out.

reply

kbp 5 hours ago [-]

I agree, the decrease in editor-friendliness is really frustrating, coming from Lisp. The lack of parens around cond clauses is also annoying to me, because it makes it so that you can't really insert a newline after a predicate for legibility, since it makes the then-part flush with the other predicates (and similarly for let bindings).

reply

agumonkey 2 hours ago [-]

I feel the same. Clojure had lots of valuable reasons for using [] but it made it step one foot aside from the lispiness I got to like; and it's not really about editor support it's something else.. I kinda loved the idea of sexp being 99% of the idea of lisp.

reply

---

IgorPartola 9 hours ago [-]

As a heavy user of Python, and someone who grew up with the "curly braces" languages, I have a question for y'all. Is this really readable?

    (setv result (- (/ (+ 1 3 88) 2) 8))

Or rather is it more readable than

    result = ((1 + 3 + 88) / 2) - 8

I just... Do you just get used to this, or is it something that you have to keep struggling with? Especially given that the latter is how we do math the rest of the time?

reply

mthomas 8 hours ago [-]

I find that both are equally (un)readable. However, for expressions that are that long, I'm inclined to write:

    (setv result (-> (+ 1 3 88) 
                     (/ 2) 
                     (- 8)))

reply

---

" Julia for example has quite a nice static type system and compiler, but also a uniform syntax that's friendly to dynamic metaprogramming and JIT'ing. "

---

fiatjaf 9 hours ago

on: Announcing CoffeeScript 2

Oh, object literals without the braces, how much I miss you!

...I miss how concise functions are in CoffeeScript and also miss being able to do object literals without the braces...

..I miss some elements of coffeescript (the existential operator was really cool), but I think in some ways it served its purpose: large parts of it were lifted for use in es6....

...I have been looking at React (will probably skip due to the patent license) and Vue, but cannot imagine going back to semicolons and (excessive) curly braces and parens...

methyl 10 hours ago

on: Announcing CoffeeScript 2

> cannot imagine going back to semicolons

Actually, Javascript does not require semicolons to be present. If you skip them, the only edge-case is when you try to start your line with `[` or `(`. In that case, you can prefix the line with `;`.

jdmichal 9 hours ago

on: Announcing CoffeeScript 2

Omitting semicolons breaks the "delete a line in isolation" rule, unless you begin the line with a semicolon instead. Which is even noisier than just having them at the end of the line, where they are unconsciously glossed over by anyone who's ever worked in a language with C-derived syntax.

Bahamut 8 hours ago | on: Announcing CoffeeScript 2

There is a nasty edge case if you concatenate scripts and use an IIFE in a script

camus2 11 hours ago

on: Announcing CoffeeScript 2

The only reason most people used CS was short hand function declaration and classes,

DiThi 10 hours ago | on: Announcing CoffeeScript 2

> The only reason most people used CS was short hand function declaration and classes

And for loops (which double as list comprehensions). And the existential operator. And "this." shorthand. Among other reasons. To my team it's also more clean-looking than Python (which used to be my favourite language), esp. when using a good syntax highlighter like Atom's (or at least that differentiates function calls, like Github's).

Yes, there's not much difference now from ES6 regarding features, but for some of us it's still worth it.

How much productivity do I gain by replacing "function()" with "() =>"? Round and round we go.

Before anyone tells me, yes I know the semantics are slightly different, but muddling the semantics with arcane syntax is not an optimal solution.

reply

awinder 2 hours ago [-]

I’m going to tell you anyways :-D. function() { ... }.bind(this) is the correct comparison to () => { ... }, which is not only a productivity gain but a readability gain.

reply

city41 6 hours ago [-]

I think the comparison to perl isn't fair. It's pretty accepted that () => is a function type construct. It mimics lambdas and function calls in many other languages. Perl did really crazy stuff like storing regex results in obscure global variables.

As for () =>, it can make a big difference in scenarios like `myArray.map(a => a.foo)` versus `myArray.map(function(a) { return a.foo })`

reply

fineline 5 hours ago [-]