See also [[proj-oot-plChDataLangs?]].
---
'pattern matching' is a key operation here; it is a SWITCH statement on the form of the data ('form' meaning which constructor is outermost; this is then extended to a nested top-down pattern reaching from the outermost constructor some ways in), combined with a destructuring bind in each case.
in fact, we should generalize pattern matching to more general 'patterns'. First, we should replace the design where a 'pattern' consists of nested constructor applications with a design where it consists of nested applications of (classes of constructors); this is like generalizing types to typeclasses. Note that there's no need for the constructors in a class to be from the same (Haskell-style) 'type'; this could come in handy for sum types in which you want to do the same thing when two different constructors within two different types within the sum type are encountered. Also, this should be recursive, in the sense that a 'class of constructors' may itself be another pattern. Second, we should make patterns more like graph regexs. These two steps can be thought of in analogy to regexs (and in precise analogy to graph regexs): the first step, the movement from constructors, to enumerated classes of constructors, to recursively defined patterns, is like moving from regex matching on single characters (explicitly given), to regex matching on character classes, to parenthesized expressions within a regex; and the second step, the movement to graph regexs, is like the addition of ? and * to the language of regular expressions.
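to make the first step concrete, here's a minimal sketch (my own illustration, in Python 3.10+, whose 'or' patterns happen to express enumerated classes of constructors; all the type names here are made up):

from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

@dataclass
class Label:
    text: str

def describe(shape):
    match shape:
        # 'Circle() | Square()' is an enumerated class of constructors: the same
        # branch fires for either, even though they are unrelated types
        case Circle() | Square():
            return 'a shape'
        case Label(text=t):
            return 'a label: ' + t
        case _:
            return 'unknown'

print(describe(Circle(1.0)))   # a shape
print(describe(Label('hi')))   # a label: hi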
---
Now there are languages like Haskell (and Coq, i think?) that put pattern matching in their core languages (like F-lite). Conditionals are done via 'switches' (or 'case's as they call them, or implicit switches via guarded/partial pattern-matching function definitions), and data decomposition is done by pattern matching. Haskell says it's a 'graph reduction' (graph redex) language, and f-lite is compilable to Reduceron 'template code', and from the name Reduceron, and its parallel-ness, you can tell that they think of it in terms of graph reduction. If you think about it, 'pattern matching' can be envisioned in terms of graph reduction, which can be seen by thinking about the Reduceron operational semantics:
An expression in the program is a node in the graph. When an expression refers to another expression using a variable, this is an edge in the graph. As Haskell begins to execute the program, at first these nodes are just lazy thunks containing expressions to possibly be evaluated, but then as execution continues, the runtime determines that it has to actually evaluate some of these expressions. So, we visit the node corresponding to the expression, and we 'reduce' that expression, which involves traversing the local edges to other nodes (and possibly creating new intermediate nodes as we partially apply stuff, i guess). Then, because Haskell probably wants to cache/memoize already-reduced expressions, we write the reduced version of the expression back to the node of the graph that corresponds to that expression. So if you had a graph visualization of this, you would see the CPU focus its attention on various graph nodes, then move its attention to their children, possibly creating new nodes for partially applied functions, and eventually reach primitive 'leaf nodes', substitute those into the expression in their parent that it was trying to evaluate, and simplify that subset of the graph by removing the temporary partially-applied nodes that are no longer needed, because a fully-applied version of them has already been computed and the requesting parent only needs that fully-applied one, so there are no more edges to the partially-applied one.
Pattern matching refers to checking the pattern of the descendant nodes on the other side of an edge. Each node is labeled with its type, and, if it is a constructor, with which constructor it is; the 'pattern' just refers to these labels, except it can also go more than one hop and include the labels of further nested descendants.
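a toy sketch of the 'reduce and write back' idea (my own illustration in Python, not Reduceron's actual semantics):

class Thunk:
    def __init__(self, fn):
        self.fn = fn            # the suspended expression
        self.value = None
        self.evaluated = False
    def force(self):
        if not self.evaluated:
            # reduce the expression, possibly forcing child thunks along the edges...
            self.value = self.fn()
            # ...then overwrite this graph node with the reduced form (memoization)
            self.fn = None
            self.evaluated = True
        return self.value

x = Thunk(lambda: 2 + 3)
y = Thunk(lambda: x.force() * 10)   # the reference to x is an edge in the graph
print(y.force())   # 50; x now holds the reduced value 5, so forcing it again is free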
So this seems great, and i bet it is great for pure expressions, but what about side effects? Now ordering matters. You can use monads to guarantee an order of evaluation, i guess. But it seems to me that you may also want to just print out debugging and logging stuff, store profiling information, and cache stuff, as nodes are evaluated, without otherwise messing up the program. I can't decide if the monads are a dirty hack, and you should confine this graph redex stuff to pure expressions and just do sequential imperative programming for interactivity, or if monads are just a beautiful way to do interactivity in a graph redex system (leaning towards the former; graph redex for expression evaluation, imperative sequence for interactivity). I do think that you should have something like Oot Statemasks to allow you to have some 'inconsequential' (literally; the constraint is that they don't change the result of the expression, except for weird special cases like disk full exceptions during logging) side-effects even within the graph redex.
---
as a control primitive, pattern matching on ADTs treats the nested query on the form of the constructor as a shortcut for a bunch of nested if-thens.
in this way, since ADTs and pattern matching are generally seen together, ADTs are closely related to a control structure, too.
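e.g. (a sketch in Python 3.10+; the Add/Lit constructors are made up):

from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    lhs: object
    rhs: object

expr = Add(Lit(0), Lit(7))

# the nested pattern...
match expr:
    case Add(lhs=Lit(value=0), rhs=rhs):
        simplified = rhs

# ...is a shortcut for nested if-thens plus a destructuring bind:
if isinstance(expr, Add):
    if isinstance(expr.lhs, Lit) and expr.lhs.value == 0:
        rhs = expr.rhs
        simplified = rhs

print(simplified)   # Lit(value=7)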
---
as a data primitive, contrast ADTs to dicts (hashtables; associative arrays) and multidimensional arrays. ADTs can do structs pretty easily (the different fields of the struct are the different arguments to a constructor), but associative arrays seem a bit more expressive than that, and probably multidimensional arrays too. Of course you can build these structures out of ADTs but i bet it's cumbersome to operate on them that way.
---
does that 'uniform' requirement on pattern matching in f-lite prohibit having more than one 'case' that might match any given thing? Because this is still order-independent as long as you have a rule that 'the most specific one matches'. But you can't just look at them one-by-one, so maybe that's what they want to rule out with the 'uniform' requirement.
---
" The operations of an abstract type are classified as follows: · Constructors create new objects of the type. A constructor may take an object as an argument, but not an object of the type being constructed. · Producers create new objects from old objects; the terms are synonymous. The concat method of String, for example, is a producer: it takes two strings and produces a new one representing their concatenation. · Mutators change objects. The addElement method of Vector , for example, mutates a vector by adding an element to its high end. · Observers take objects of the abstract type and return objects of a different type. The size method of Vector , for example, returns an integer. " [1]
---
in Haskell etc, pattern matching against constructors is against (the set of all) fixed length tuples
how to extend that to matching patterns on (non-fixed-length) graphs? maybe graph regex?
---
stuff in Python that is like just a struct, a class with no methods:
1)
class C:
    def __init__(self, a):
        self.a = a

x = C(1)

2)
x = type('C', (object,), dict(a=1))()

3)
import types
x = types.SimpleNamespace(a=1)

4)
import collections
C = collections.namedtuple('C', ['a'])
x = C(a=1)

5)
import attr

@attr.s
class C(object):
    a = attr.ib()

x = C(1)
https://glyph.twistedmatrix.com/2016/08/attrs.html makes a case that the 'attr' library is the best out of these. In short, (1) and (2) have no __repr__ or __eq__, and (4) (a) has fields accessible as numbered indices, too, (b) compares equal to raw tuples of the same values, and (c) is (mostly?) immutable. He doesn't treat SimpleNamespace, which seems to cover the basics. attrs does have various more advanced features than SimpleNamespace, though, including __lt__(), asdict(), validators, and optional closedness via __slots__ (no additional attributes can be added later).
---
_asummers 4 days ago [-]
Const does not mean immutability, only immutable references to the outermost pointer. It is equivalent to final in Java. While that solves the issue with numbers changing state, it does not help objects e.g. For that you need something like immutable.js from Facebook.
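(a Python analog of this point, my own example: an immutable reference doesn't make the referenced object immutable)

t = ([1, 2],)       # the tuple itself is immutable...
t[0].append(3)      # ...but the list it references can still be mutated
print(t)            # ([1, 2, 3],)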
---
example:
(4:this22:Canonical S-expression3:has1:55:atoms)
"a binary encoding form of a subset of general S-expression (or sexp)...The particular subset of general S-expressions applicable here is composed of atoms, which are byte strings, and parentheses used to delimit lists or sub-lists. These S-expressions are fully recursive. ... While S-expressions are typically encoded as text, with spaces delimiting atoms and quotation marks used to surround atoms that contain spaces, when using the canonical encoding each atom is encoded as a length-prefixed byte string. No whitespace separating adjacent elements in a list is permitted. The length of an atom is expressed as an ASCII decimal number followed by a ":". ... A csexp includes a non-S-expression construct for indicating the encoding of a string, when that encoding is not obvious. Any atom in csexp can be prefixed by a single atom in square brackets – such as "[4:JPEG]" or "[24:text/plain;charset=utf-8]". " [3]
pros:
Links:
---
these XML-like features are cool:
"The first atom in a csexp list, by convention roughly corresponds to an XML element type name in identifying the "type" of the list. "
also http://json-ld.org/ looks cool
---
i skimmed:
my summary:
CapnProto is sort of a successor to protobuf (the guy who made it was a principal designer of protobuf). Unlike Protobuf, it is big into 'zero-copy', meaning that instead of parsing an incoming message, it just keeps the bytes around and provides accessor functions to use it.
Protobuf is supported by Google and is cross-platform. CapnProto is made by a few guys at a startup and doesn't support Windows very well yet. One commenter found that Flatbuffers' Java implementation was 'more mature' than CapnProto's. The CapnProto guy thinks Thrift is ~categorically worse than Protobuf.
Other 'zero-copy' guys include flatbuffers and SBE. I have heard of flatbuffers.
---
I think for Oot serialization and interop, if possible, we should choose one of these sorts of guys and use it rather than coming up with our own JSON-like format and/or our own pickling format.
a list of contenders is in plChData. These are not all the same type of thing, but whatever, i'm putting them in the same list. Here they are again with a short summary:
Not in the running:
hmm.. a lot of these aren't great.. i guess what i want is a 'JSON+'. Plus dates, plus references.
EDN/Transit seem to provide this, except for references. Transit has a built-in efficient MessagePack encoding and even a JSON encoding, so that sounds good. TOML looks cool. YAML is popular but the spec is too complex, and i don't like significant indentation. StrictYAML still has significant indentation. Should check out HJSON, JSON5, SDLang. CSEXPs are a great lower layer but they don't even have a set of types, so they don't solve the problem.
So far i guess Transit seems like the best. Having a JSON and a MessagePack encoding is a pretty big advantage. And, being a Clojure thing designed for language interoperability, instead of for RPC protocols, i bet it'll fit my use case better than CapnProto.
---
other random schema systems:
nitrogen 781 days ago [-]
If anyone finds JSON Schema in Ruby to be too slow, I developed a Ruby-based schema system that is much faster:
http://rubygems.org/gems/classy_hash
https://github.com/deseretbook/classy_hash
I wrote it for an internal backend system at a small ecommerce site with a large retail legacy.
Edit: Ruby Hashes (the base "language" used by Classy Hash) aren't easily serialized and shared, but if there's enough interest, it would be possible to compile most JSON Schema schemas to Classy Hash schemas.
alexatkeplar 781 days ago [-]
Have you looked at contracts.ruby (https://github.com/egonSchiele/contracts.ruby)? I'm sure you could overlap some code
nitrogen 781 days ago [-]
Interesting. It looks like contracts.ruby does for method calls what Classy Hash aims to do for API data.
---
in Python, you can pack together a bunch of reused optional arguments into a namedtuple and pass that along instead, but then it's cumbersome to construct one of these while inheriting the defaults, eg:
disassemble_raw_bytecode_file(infile,outfile, dis_opts=dis_defs._replace(allow_unknown_opcodes=args.allow_unknown_opcodes))
it would be better to have a namedtuple with defaults.
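(note: since Python 3.7, namedtuple does take a 'defaults' argument; a sketch, where 'verbose' is a made-up second field:)

import collections

DisOpts = collections.namedtuple('DisOpts',
                                 ['allow_unknown_opcodes', 'verbose'],
                                 defaults=[False, False])

opts = DisOpts(allow_unknown_opcodes=True)   # 'verbose' falls back to its default
print(opts)   # DisOpts(allow_unknown_opcodes=True, verbose=False)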
---
Typescript has some neat syntax for destructuring, merging, and choosing fields of objects, called 'object spreads' and 'object rests':
" Object Rest & Spread
We’ve been excited to deliver object rest & spread since its original proposal, and today it’s here in TypeScript 2.1. Object rest & spread is a new proposal for ES2017 that makes it much easier to partially copy, merge, and pick apart objects. The feature is already used quite a bit when using libraries like Redux.
With object spreads, making a shallow copy of an object has never been easier:
let copy = { ...original };
Similarly, we can merge several different objects so that in the following example, merged will have properties from foo, bar, and baz.
let merged = { ...foo, ...bar, ...baz };
We can even add new properties in the process:
let nowYoureHavingTooMuchFun = { hello: 100, ...foo, world: 200, ...bar, }
Keep in mind that when using object spread operators, any properties in later spreads “win out” over previously created properties. So in our last example, if bar had a property named world, then bar.world would have been used instead of the one we explicitly wrote out.
Object rests are the dual of object spreads, in that they can extract any extra properties that don’t get picked up when destructuring an element:
let { a, b, c, ...defghijklmnopqrstuvwxyz } = alphabet; "
---
Erwin 1 day ago [-]
This might benefit from SQLite's Virtual tables: https://sqlite.org/vtab.html
With Virtual Tables you can expose any data source as a SQLite table -- then you can use every SQL feature that sqlite offers. You can just tell sqlite how to iterate through your data with a few functions, with an option to push down filtering information for efficiency.
You can also create your own aggregates, functions etc.
Here's an article where the author exposes redis as a table within sqlite: http://charlesleifer.com/blog/extending-sqlite-with-python/
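(Python's sqlite3 module doesn't expose the virtual-table API — the apsw package does — but the 'your own aggregates, functions' part looks like this; my own sketch:)

import sqlite3

con = sqlite3.connect(':memory:')
con.create_function('double', 1, lambda x: x * 2)

class Product:
    # custom aggregate: sqlite3 calls step() per row, then finalize()
    def __init__(self):
        self.acc = 1
    def step(self, value):
        self.acc *= value
    def finalize(self):
        return self.acc

con.create_aggregate('product', 1, Product)
con.execute('CREATE TABLE t (x INTEGER)')
con.executemany('INSERT INTO t VALUES (?)', [(2,), (3,), (4,)])
print(con.execute('SELECT double(x) FROM t').fetchall())    # [(4,), (6,), (8,)]
print(con.execute('SELECT product(x) FROM t').fetchone())   # (24,)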
erydo 1 day ago [-]
My thoughts went straight to PostgreSQL Foreign Data Wrappers. Something like that would be really helpful!
---
bipvanwinkle 9 days ago [-]
Out of curiosity what progress has been made in regards to improving the ergonomics of records in Haskell? Stephen references that an answer is in the works, but it looks like it has stalled out.
harpocrates 9 days ago [-]
Actually, a lot has been done and a lot is coming in the near future. GHC 8.0 brought us `DuplicateRecordFields`, so that we can finally use the same field name for two records.
There is active work done by Adam Gundry to extend this even further [1]. The key part of this is that there will be a new type class so that I can express as a constraint that a type must have a field with a certain name and with a certain type.
Further in the future, but still actively discussed is using overloaded labels as lenses [2]. Past that, I can't imagine anything else I would want records to do.
[1] https://github.com/adamgundry/ghc-proposals/blob/overloaded-... [2] http://stackoverflow.com/questions/38136144/replace-record-p...
wyager 9 days ago [-]
Most people use Lenses for heavily record-oriented programming. They work quite well. They are less convenient than built-in structural syntax like in Javascript, but once you get past the initial inconvenience they are vastly more powerful.
axman6 9 days ago [-]
On the contrary, lenses are far more powerful than what's available in JavaScript and all other OO languages. Traversals and Prisms give so much power that's lacking in OO
dllthomas 9 days ago [-]
I don't see how that's contrary to what the parent said.
---
" Specifically the immutability providing an easy way to reason about functions via only inputs and outputs.
I feel the the memory model of Rust(single mutable ref or unlimited non-mutable refs) combined with the fact that there are no mutable globals(inside safe code) gives you a much easier system to reason about. "
---
" Many types, one interface
One of Clojure’s core features is its generic data-manipulation API. A small set of functions can be used on all of Clojure’s built-in types. For example, the conj function (short for conjoin) adds an element to any collection, as shown in the following REPL session:
user> (conj [1 2 3] 4)
[1 2 3 4]
user> (conj (list 1 2 3) 4)
(4 1 2 3)
user> (conj {:a 1, :b 2} [:c 3])
{:c 3, :a 1, :b 2}
user> (conj #{1 2 3} 4)
#{1 2 3 4}

Each data structure behaves slightly differently in response to the conj function (lists grow at the front, vectors grow at the end, and so on), but they all support the same API. This is a textbook example of polymorphism — many types accessed through one uniform interface.
Polymorphism is a powerful feature and one of the foundations of modern programming languages. The Java language supports a particular kind of polymorphism called subtype polymorphism, which means that an instance of a type (class) can be accessed as if it were an instance of another type.
In practical terms, this means that you can work with objects through a generic interface such as java.util.List without knowing or caring if an object is an ArrayList, LinkedList, Stack, Vector, or something else. The java.util.List interface defines a contract that any class claiming to implement java.util.List must fulfill."
---
" > Cloud Spanner uses a SQL dialect which matches the ANSI SQL:2011 standard with some extensions for Spanner-specific features. This is a SQL standard simpler than that used in non-distributed databases such as vanilla MySQL?, but still supports the relational model (e.g. JOINs). It includes data-definition language statements like CREATE TABLE. Spanner supports 7 data types: bool, int64, float64, string, bytes, date, timestamp[20].
> Cloud Spanner doesn't, however, support data manipulation language (DML) statements. DML includes SQL queries like INSERT and UPDATE. Instead, Spanner's interface definition includes RPCs for mutating rows given their primary key[21]. This is a bit annoying. You would expect a fully-featured SQL database to include DML statements. Even if you don't use DML in your application you'll almost certainly want them for one-off queries you run in a query console. "
---
"Son
A subset of JSON.
JSON contains lots of extraneous details like the difference between 10e2 and 10E2. This helps when writing it by hand, but can cause problems such as making it difficult to serialize and hash consistently.
Son is a subset of JSON intended to remove redundant options. "
https://github.com/seagreen/Son
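(a quick Python illustration of the hashing problem Son is addressing, my own example — equivalent JSON texts hash differently unless you canonicalize first:)

import hashlib, json

a = '{"x": 10e2}'
b = '{"x": 10E2}'
print(hashlib.sha256(a.encode()).hexdigest() ==
      hashlib.sha256(b.encode()).hexdigest())   # False: same value, different bytes

def canon(s):
    # one (naive) canonicalization: reserialize with sorted keys and fixed separators
    return json.dumps(json.loads(s), sort_keys=True, separators=(',', ':'))

print(canon(a) == canon(b))   # True: both become {"x":1000.0}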
---
"
onion2k 10 hours ago [-]
JSON doesn't have comments so it's a bad choice for human-editable config. YAML doesn't have an end marker so you can never be sure if you've got the entire file. XML is a huge pain to edit by hand if the schema is complicated, and overly verbose if it isn't. None of them are even close to being safe (for example https://arp242.net/weblog/yaml_probably_not_so_great_after_a...). All of those choices fail your "elegance" test.
TOML is my preferred config file language option where I have a choice - https://github.com/toml-lang/toml - but I suspect that suffers a lot of the same problems.
note: https://arp242.net/weblog/yaml_probably_not_so_great_after_all.html says that YAML has an operator that runs other commands when parsed!
rendaw 8 hours ago [-]
I will capitalize on this derailment to promote luxem, my flexible and minimal JSON alternative: https://github.com/rendaw/luxem#what-is-luxem
Sunset 10 hours ago [-]
Just add comments to JSON, Douglas Crockford can eat his heart out.
reply "
" vince14 8 hours ago [-]
StrictYAML - https://github.com/crdoconnor/strictyaml
marcoms 8 hours ago [-]
Still does not allow tabs for indentation - same problem as `make` but inverted
reply "
---
https://github.com/rendaw/luxem#what-is-luxem
" luxem is a specification for serializing structured data.
luxem is similar to JSON. The main differences are:
You can specify a type using (typename) before any value. Ex: (direction) up.
You can have a , after the final element in an object or array.
Quotes are optional for simple strings (strings containing no spaces and no ambiguous symbols).
The document is an array with implicit (excluded) [] delimiters.
Comments (written as *comment text*) can be placed anywhere whitespace is.
All documents should be UTF-8 with 0x0A line endings (linux-style).
No basic types are defined in the parsing specification, but the following should be used as a guideline for minimum data type support:
bool true|false
int -?[0-9]+
dec -?[0-9]+(\.[0-9]+)?
string
ascii16 ([a-p][a-p])*
ascii16 is a binary encoding that is both ugly and easy to parse, using the first 16 characters of the alphabet. "
---
hueving 1 day ago [-]
Can you provide a 'top 3' list of reasons to use Cap'n Proto over Protobufs?
doh 1 day ago [-]
1) Cap'n Proto doesn't encode/decode messages, thus it's much cheaper for processing and memory management
2) protobuf in the proto3 design doesn't carry default values. So if you have a bool field and want to explicitly send false, well, you have to change it to some other type or use the default values all the time
3) protobuf generates incredibly large serialization/deserialization support code for each template. For some languages like Python it can be in hundreds of kilobytes. Cap'n Proto messages are significantly smaller
There is more for CnP but Protobuf has much better support and is used by default in projects like gRPC. Also, new development on CnP is lagging in comparison to Protobuf.
But I'm using it in one of my side projects and I'm very happy with it
StreamBright 1 day ago [-]
Performance might be one reason to use Cap'n Proto over Protobuf.
http://dbeck.github.io/5-lessons-learnt-from-choosing-zeromq...
---
in Python, it's hard to tell ahead of time if some object supports len(). There should be a method to check this dynamically.
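(fwiw, collections.abc.Sized seems to already provide a dynamic check along these lines — its __subclasshook__ tests for __len__:)

from collections.abc import Sized

print(isinstance([1, 2, 3], Sized))          # True
print(isinstance(42, Sized))                 # False
print(hasattr((x for x in []), '__len__'))   # False: generators don't support len()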
---
yeah this is kinda weird...
---
Python's not allowing you to change 'closure' variables from a containing scope is not actually that bad b/c of the pass-by-reference stuff, eg you can change the contents of a struct even if you can't rebind the struct variable itself.
eg:
In [6]: def outer():
   ...:     a = 3
   ...:     def inner():
   ...:         print a
   ...:         a = 4
   ...:         print a
   ...:     inner()
   ...:     print a
   ...:

In [7]: outer()

UnboundLocalError                         Traceback (most recent call last)
<ipython-input-7-8493578e1e0e> in <module>()
----> 1 outer()

<ipython-input-6-6adaf1641721> in outer()
      5         a = 4
      6         print a
----> 7     inner()
      8     print a
      9

<ipython-input-6-6adaf1641721> in inner()
      2     a = 3
      3     def inner():
----> 4         print a
      5         a = 4
      6         print a

UnboundLocalError: local variable 'a' referenced before assignment

In [8]: def outer():
   ...:     a = 3
   ...:     def inner():
   ...:         print a
   ...:         b = 4
   ...:         print a
   ...:     inner()
   ...:     print a
   ...:

In [9]: outer()
3
3
3

In [10]: def outer():
    ...:     a = {'a': 3}
    ...:     def inner():
    ...:         print a
    ...:         a['a'] = 4
    ...:         print a
    ...:     inner()
    ...:     print a
    ...:

In [11]: outer()
{'a': 3}
{'a': 4}
{'a': 4}
---
"
---
" When I was at Google, we had a saying that "The only interesting part of MapReduce? is the phase that's not in the name: the Shuffle". [That's the phase where the outputs of the Map are sorted, written to the filesystem and eventually network, and delivered to the appropriate Reduce shard.] If you don't need a shuffle phase - either because you have no reducer, your reduce input is small enough to fit on one machine, or your reduce input comes infrequently enough that a single microservice can keep up with all the map tasks - then you don't need a MapReduce?-like framework. "
---
ToJans 12 hours ago [-]
First:
> Elm has an incredibly powerful type system
Near the end of the article:
>Want to decode some JSON? Hard, especially if the JSON is heavily nested and it must be decoded to custom types defined in your application.
IMHO the lack of typeclasses/traits is really hurting Elm. Take haskell f.e.
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics
data Person = Person {
name :: Text
, age :: Int
} deriving (Generic, Show)
instance ToJSON Person
instance FromJSON Person

While I understand Evan's aversion against complexity, it makes me a bit wary about using ElmLang in production. I am currently using TypeScript, but if I would need a more powerful type system, I would probably switch to Haskell/PureScript or OCaml/BuckleScript instead.
fbonetti 4 hours ago [-]
I really wish people would stop spreading the meme that decoding JSON in Elm is "hard". Yes, Haskell allows you to automatically decode/encode datatypes, but this only works in the simplest of cases. For example, if your backend returns a JSON object with snake-cased fields, but your model has camel-cased fields, `instance ToJSON Person` won't work; you'll have to write a custom decoder. The automatic decoders/encoders in Haskell only work if the shape of your JSON perfectly matches your record definition.
Writing decoders in Elm is not hard. It's manual. It's explicit. It forces you to specify what should happen if the JSON object has missing fields, incorrect types, or is otherwise malformed. There's a slight learning curve and it can be time consuming at first, but it guarantees that your application won't blow up at runtime because of some bad data. Because of this, JSON decoding is frankly one of my favorite parts about Elm.
Typescript, on the other hand, offers no such guarantee. If you write a function that takes an Int and you accidentally pass it a String from a JSON response, your app will blow up and there's nothing the compiler can do to help you. Personally, I'd rather write JSON decoders than have my app blow up because of a silly mistake.
Albert_Camus 3 hours ago [-]
Author here. I agree with your points, and in my article I specifically mention that there are benefits to some of the "hardness" of certain tasks in Elm (type-safety in the case of JSON decoding).
But to claim that JSON decoding in Elm is not significantly more difficult than it is in JavaScript would be misleading. A front end developer that has only written JS will be surprised when he/she cannot just drop in the equivalent of JSON.parse() and get an Elm value out of it. I call it "hard" because there is a bit of a learning curve, and it does require some thought, and quite frankly it takes quite a bit of time if you have a large application like we do.
Moreover, I am not complaining. And I do not think people should be. As I said in the article, the tradeoff is worth it.
somenewacc 2 hours ago [-]
You don't need to write the decoder boilerplate manually to get all those benefits.
For example, here's how you rename a JSON field while serializing/deserializing a data type in Rust:
https://play.rust-lang.org/?gist=1b382bc1572858841d5e392435d...
You just annotate the field with #[serde(rename = "..")]. Here is a list of such annotations
https://serde.rs/field-attrs.html
Serde is also generic in the serialization format; the example I linked uses serde_json, and was adapted from its README here https://github.com/serde-rs/json
MaxGabriel 2 hours ago [-]
Same for Haskell. This package provides common translations like snake_case to CamelCase:
https://www.stackage.org/haddock/lts-9.0/aeson-casing-0.1.0....
Giving you automatic encoders/decoders like so:
instance ToJSON Person where
    toJSON = genericToJSON $ aesonPrefix snakeCase

instance FromJSON Person where
    parseJSON = genericParseJSON $ aesonPrefix snakeCase
And the implementation of that package is like 4 simple lines for snake case; it's totally doable on your own for whatever you need https://github.com/AndrewRademacher/aeson-casing/blob/260d18...
I haven't had to do snake_case to CamelCase with Aeson before, but I have dropped a prefix before, like "userName" -> "name", "userAge" -> "age", and it was pretty easy and well supported.
Also I would note that this isn't as big of a deal for Haskell and Rust, because they're primarily backend languages, so they're more often sending out JSON in whatever form they please, rather than consuming it. In my experience the main consumers (Javascript on the web, Objective-C on iOS and Java on Android) use CamelCase anyway, so there's a natural compatibility.
wraithm112 2 hours ago [-]
With respect to the snake-cased fields issue, it's actually not that hard to do that with Generic in Haskell.
data Person = Person
{ personFirstName :: Text
, personLastName :: Text
    } deriving (Generic)

instance ToJSON Person where
    toJSON = genericToJSON $ aesonPrefix snakeCase

instance FromJSON Person where
    parseJSON = genericParseJSON $ aesonPrefix snakeCase

Which produces messages like:
{
"first_name": "John",
"last_name": "Doe"
}
dmjio 11 hours ago [-]
If decoding json in Elm is considered hard, I'd recommend checking out miso (https://github.com/dmjio/miso), a Haskell re-implementation of the Elm arch. It has access to mature json libraries like aeson for that sort of thing, along with mature lens libraries for updating your model. Here's an example of decoding json with GHC.Generics using typeclasses. https://github.com/dmjio/miso/blob/master/examples/xhr/Main.hs#L130-L131
enalicho 11 hours ago [-]
You don't need to switch a whole language because of JSON decoding. There are many tools that exist to help you write JSON decoders in Elm. The language is not just about the architecture -- you can implement the architecture in any language, as Redux has proven. What people like about Elm is the compiler and design philosophy that radiates through the entire community. Switching to Haskell won't give you that, as the Haskell community has different priorities.
Here are some JSON tools for Elm:
desireco42 8 hours ago [-]
Oh, this is awesome, json2elm really helps :) thanks, didn't know about it.
a-saleh 10 hours ago [-]
I am kinda waiting for Purescript to mature a tiny bit more in this regard, because it seems that they have something special brewing there, with their polymorphic record type and interesting take on type-level programming.
Because this [1], even though it seems to be just a experiment so far, looks really good.
I.e: doing
type MyTestStrMap = { a :: Int , b :: StrMap Int }
and then just calling
let result = handleJSON """ { "a": 1, "b": {"asdf": 1, "c": 2} } """
let newResult = doSomething (result :: Either MultipleErrors MyTestStrMap)
is kinda all I ever wanted in these haskell inspired languages?
[1] https://github.com/justinwoo/purescript-simple-json/blob/master/test/Main.purs
---
" agentm 14 hours ago [-]
Project:M36 is an implementation of the proposed design from the "Out of the Tarpit" paper."
https://github.com/agentm/project-m36
---
from [9]
Feature 5: Everything is an iterator
In Python 3, range, zip, map, dict.values, etc. are all iterators.
If you want a list, just wrap the result with list.
Explicit is better than implicit.
Harder to write code that accidentally uses too much memory, because the input was bigger than you expected.
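e.g. (a quick illustration of the point):

r = map(lambda x: x * 2, range(3))
print(r)         # <map object ...>, an iterator, not a list
print(list(r))   # [0, 2, 4]
print(list(r))   # []: the iterator is now exhausted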
---
from [10]