The Spirit parser framework - metaprogramming and no macros
On the topic of metaprogramming and macros - there is one excellent example of metaprogramming with minimal macros - the Spirit parser framework (part of the Boost C++ libraries).
The developer who created Spirit is very much against macros (he says so very forcefully in the docs), and I agree with him.
Anyway - in response to the "let's make a programming language" challenge, rather than my suggesting a completely new language, I'm keen to see Orca (the open-source Rebol clone) developed. At present, it's still pretty basic, but has great potential (given that Syllable will now use it as their default scripting language).
Of all programming languages, only Forth comes close to the "power-to-weight" ratio that Rebol has. Python, Ruby, Perl - all good, but pretty heavyweight. One of Rebol's best aspects is that DSLs are trivial to create, using "dialects". So, there's the "create your own language" thing taken care of ... :-) By obsidian at Sun, 2006-06-04 21:42
Isn't Spirit written with template metaprogramming techniques? These are C++'s ad hoc version of macros. We aren't talking about the C preprocessor here. Or do you mean the author limited his use of template metaprogramming? By rebooted at Sun, 2006-06-04 22:02
---
copx 1 day ago
Some of my relatively obscure favorites:
Dylan: http://opendylan.org/
Euphoria: http://www.rapideuphoria.com/
zem 1 day ago
i've always felt that had i discovered euphoria earlier in my programming life, i would have really liked it. these days i'd miss all the features more recent languages have borrowed from lisp and/or ml, but it would have been an awesome second or third language (after basic and c).
copx 1 day ago
I discovered Euphoria back in the DOS days when it was still commercial. I agree that it seems a little dated these days but I still fondly remember it.
Also it is still a great language for non-professional programmers and still has an active community there. Note the recently updated SDL2, SFML2, and Allegro bindings for example.
iso-8859-1 1 day ago
MetaML, a language with as many levels of macros as you'd like: http://www.cs.rice.edu/~taha/publications/journal/tcs00.pdf
(implementation: MetaOCaml)
zem 1 day ago
if you're looking for interesting systems languages, clay looks good too: http://claylabs.com/clay/
InAnEmergency 1 day ago
You may be interested in the Fledgling Languages List: http://fll.presidentbeef.com/
doublec 1 day ago
For an interesting systems programming language, try ATS [1]. It's an ML-style functional programming language with dependent types, linear types, macros, and generics, and it compiles to C, giving reasonable portability.
epsylon 1 day ago
Check out fogus' lists of "Perlis languages": http://blog.fogus.me/2011/08/14/perlis-languages/ http://blog.fogus.me/2011/10/18/programming-language-develop...
-- " Frankly, I guess I've become something of a functional zealot, so my problem with Go is that it's so stubbornly imperative. This is why I can't get behind Go, as much as I want to. I feel like it doesn't consider many of the lessons that have been learned about FP making life easier. Set operations (map, filter, reduce) are insanely common, yet doing them imperatively sucks to the point of discouraging them. Excessive mutability is difficult to reason about. Nobody expects the nil/null/none inquisition. We tend to forget about side effects. Without an extensible language, which requires truly first-class functions, you're at the language designer's mercy.
Hell, Scala probably isn't the be-all, end-all answer to all of this. I just don't think that a doggedly-imperative, non-composable, statement-oriented language is the future of programming "in the large", not when the past has shown us that we tend to produce buggy, unmaintainable software this way. I'm pragmatic enough to realize that pure FP isn't a solution, but I feel strongly that sticking to traditional imperative because it's familiar is a costly mistake.
"
---
" Now, static typing is just one kind of static assertion[1] I'd like to have. Side effect isolation (i.e. language-enforced purity) would be another feature I'd like to see become common. For example, the D programming language has a "pure" keyword that lets you mark a function as side-effect free. (In Haskell, all function are pure by default, and you escape the confines of purity via monads.)
I'd like to do research into (inventing) other forms of static assertions. One thing that's been on my mind lately has been static "complexity assertion" for programs. I don't know if this is even possible, but it would be nice to be able to ascertain before running a program, certain time & space complexity bounds on it. This would perhaps require us to drop the programming language itself to some level "slightly below" Turing machines -- but this in itself could be achieved via a keyword like D's "pure" or something more elegant. (Note: my point is not that we're going to make the entire language non-Turing complete -- but rather that we'll have subsets of the language that are not, and this reduction of power would/could bring with it a greater possibility for statically asserting stuff.)
[1] FYI I made this term up. Let me know if there's something better that describes this. "
sridharvembu 8 hours ago
Your point on "slightly below" Turing machines caught my attention - exactly the same terminology I have used. I want as many proofs (assertions) as possible about code, and Rice's Theorem is a problem, so slightly below Turing is on the radar. If you are interested, we can discuss this. Shoot me an email at svembu at zoho ...
chas 6 hours ago
Total functional programming (and the associated analysis of codata) has already been touched on, so I'll just address your interest in static guarantees for space usage. The Virgil programming language[1] has been designed with exactly this in mind. It is aimed at embedded systems where memory is extremely constrained and running out of memory could kill people. Heap allocation is not possible in the language and all data structures are initialized during compilation (like C++ templates, but more sophisticated). The compiler can use the initialization information for advanced optimization and analysis as well as serving as a bound on the memory usage. [2] The language also has some interesting static typing ideas, but they are not as distinct from other languages.
Further discussion on LtU: http://lambda-the-ultimate.org/node/2131
[1] https://code.google.com/p/virgil/ [2] http://compilers.cs.ucla.edu/virgil/overview.html
e12e 2 hours ago
I wonder if a restricted subset of a stack based language like Forth might also work?
winter_blue 8 hours ago
I was just going to make the same reply.
> if you limit yourself to a language or a sub-set of a language that is not Turing-complete, then you can make static assertions about things such as halting
There's already something that is non-Turing complete but comes with a halting guarantee -- it's called total functional programming[1]. This paper (linked below) is actually what got me thinking about this whole thing. The author argues that functions in functional programming languages aren't really like mathematical functions, because they can run forever (never halt) and thus fail to produce a proper return value. To mitigate that, he creates "total functional programming".
Total functions are guaranteed to have a return value (i.e. terminate eventually), but they could take very long to run. If we could actually have complexity bounds, or be able to determine a given function's complexity, that would be a great boon to software engineering (i.e. assuming they adopt it... which is a whole different matter).
The author also makes a very good point that most parts of your program are meant to terminate (i.e. the vast set of functions and algorithms that comprise your program). Only the top level needs to be Turing-complete; the rest can at least be total.
I actually want to see if it's possible to push this concept further, to create a ring-like hierarchy of languages -- the lowest one being the weakest and the one we can extract the most static assertions out of, and so on. There's a language called Hume[2] that sounds like it accomplishes at least part of this goal, but I haven't really looked into it.
[1] http://www.jucs.org/jucs_10_7/total_functional_programming/j...
[2] https://en.wikipedia.org/wiki/Hume_%28programming_language%2...
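A minimal Python sketch (my own illustration, not from the paper) of the distinction: general recursion may diverge, while structural recursion that consumes one element per call obviously terminates on finite input, and roughly that second shape of definition is what a total language accepts.

def collatz_steps(n):
    # general recursion: nobody has proved this halts for every positive n
    if n == 1:
        return 0
    return 1 + collatz_steps(n // 2 if n % 2 == 0 else 3 * n + 1)

def total_sum(xs):
    # structural recursion: each call consumes one list element,
    # so termination is guaranteed for any finite list
    if not xs:
        return 0
    return xs[0] + total_sum(xs[1:])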
creatio 5 hours ago
For the "complexity assertion" I think you may be referring to something like this: http://resourceanalysis.cs.ru.nl/
Maybe it's not exactly what you meant, but it can do basic analysis for loops.
---
rprospero 10 hours ago
It's not cliche, it's just personal. For instance, most of the code I write is numerical code. To that end, any language without operator overloading is a non-starter, since it's far harder to find a bug in:
div(add(mult(-1,b),sqrt(sub(pow(b,2),mult(mult(3,a),c)))),mult(2,a))
than it is in
(-b+sqrt(b^2-3ac))/(2*a)
On the other hand, if I wrote server code, operator overloading would be far less useful. I'd probably curse any programmer who used it and thank the gods that it was left out of Go.
Conversely, since I write a lot of numerical code, I don't care about generics or typing, which is crucial to many other programmers. Generics don't matter since everything I work with is either a Double or a collection of Doubles. Similarly, static typing doesn't help, since most functions just have the type (Double -> Double) and the type checker can't tell a sine from a logarithm. Of course, the reverse is also true. Since everything is either a double or a collection of doubles, the fancy tricks that dynamic languages offer don't give me a damn thing, so I'm extremely ambivalent about the typing debate.
Of course, on other projects, I've written code that benefited from static typing and I've written valid code that would choke any type checker. I've written code that heavily needed generics. When I did that, I used the languages that had the features I needed.
Go just won't work for the programs I write and it sounds like it doesn't work for yours, either. That's why we won't use Go. I've heard it works wonderfully for a certain class of server software and I'm glad those guys have a useful language for their domain. If I ever have to write another server, I might even teach myself Go. But don't feel guilty that you're using an old hammer instead of the shiny new saw.
---
termain 9 hours ago
Ambivalent or apathetic regarding the typing debate?
Typing can become useful in numerical code when you move past operating on scalars. Column and row vectors needn't be confused, a two-vector and a complex number have different types, etc.
Also, physical quantities can have different types and a type system can be useful there.
I totally agree that for numerical code, operator overloading is of great utility.
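A minimal Python sketch (hypothetical types, mine, not from the comment) of the physical-quantities point: giving metres and seconds distinct types lets a static checker such as mypy reject mixing them before the program runs.

from dataclasses import dataclass

@dataclass(frozen=True)
class Metres:
    value: float

@dataclass(frozen=True)
class Seconds:
    value: float

@dataclass(frozen=True)
class MetresPerSecond:
    value: float

def speed(d: Metres, t: Seconds) -> MetresPerSecond:
    return MetresPerSecond(d.value / t.value)

# speed(Seconds(10.0), Metres(30.0))   # flagged by a static type checker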
--
http://stackoverflow.com/questions/3606591/why-does-intellij-idea-compile-scala-so-slowly
" up vote 177 down vote accepted
There are two aspects to the (lack of) speed for the Scala compiler.
Greater startup overhead
Scalac itself consists of a LOT of classes which have to be loaded and jit-compiled
Scalac has to search the classpath for all root packages and files. Depending on the size of your classpath this can take one to three extra seconds.
Overall, expect a startup overhead of scalac of 4-8 seconds, longer if you run it the first time so disk-caches are not filled.
Scala's answer to startup overhead is to either use fsc or to do continuous building with sbt. IntelliJ needs to be configured to use either option, otherwise its overhead even for small files is unreasonably large.
Slower compilation speed: Scalac manages about 500 to 1000 lines/sec, whereas Javac manages about 10 times that. There are several reasons for this.
Type inference is costly, in particular if it involves implicit search.
Scalac has to do type checking twice; once according to Scala's rules and a second time after erasure according to Java's rules.
Besides type checking there are about 15 transformation steps to go from Scala to Java, which all take time.
Scala typically generates many more classes per given file size than Java, in particular if functional idioms are heavily used. Bytecode generation and class writing takes time.
On the other hand, a 1000 line Scala program might correspond to a 2-3K line Java program, so some of the slower speed when counted in lines per second has to be balanced against more functionality per line.
We are working on speed improvements (for instance by generating class files in parallel), but one cannot expect miracles on this front. Scalac will never be as fast as javac. I believe the solution will lie in compile servers like fsc in conjunction with good dependency analysis so that only the minimal set of files has to be recompiled. We are working on that, too.
"
" The fastest to compile was JUnit, followed closely by TestNG? and then ScalaTest?. specs2 was significantly slower. So far the evidence indicates the differences in test compile time are influenced primarily by the number of implicits in scope, the number of by-names used, and whether or not methods are being mixed into the test class by extending a supertrait versus inherited by extending a superclass.
Most likely the influence of the number of implicits in scope is in some way multiplied by the actual number of implicit applications, because at each implicit application the compiler must ensure one and only one implicit “heals” the candidate compiler error. In other words, the compiler can't stop looking at implicits once it finds one that solves the type error at hand; it must keep on looking to make sure one or more other implicits don't also solve that same type error. Thus the compile time cost of implicit use is likely related to the number of implicits in scope multiplied by the number of times the compiler must check all those implicits (i.e., the number of implicit applications). I say “likely,” because as yet we haven't written a script to verify this theory. If someone wants to contribute a new script to the project, this would be a good one to contribute.
Also, most likely it is the number of function literals in general, not just the use of by-names, that influences compile time. But again, we have not actually written a script that tests that theory. This would be another good candidate for a contribution to the project.
Lastly, the compile time cost of mixing in a trait compared to extending a superclass likely exists because the compiler must insert forwarding methods for all the methods declared in the trait into the body of the subclass when mixing in, but when extending a superclass it does not. Instead, the subclass simply inherits the superclass implementations of those methods. We did observe that the larger the number of methods to be mixed in, the more the compile time was increased. As JUnit and TestNG don't use traits (being Java frameworks), this was a difference observed between ScalaTest and specs2 only. Because specs2's mutable and immutable Specification traits declare more methods than ScalaTest's various style traits, the compilation of specs2 styles is being slowed down more by this than that of ScalaTest's styles. " -- http://www.artima.com/articles/compile_time.html
---
on the idea of a sublanguage with limited complexity:
(1) ackermann's function; one could presumably handle ackermann's fn in a primitive recursive-like manner by adding a three argument https://en.wikipedia.org/wiki/Hyperoperation (see https://en.wikipedia.org/wiki/Ackermann_function ) (something like https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation ). This leads to the idea of an (infinite, i guess) hierarchy that adds more and more total functions to primitive recursion such that the union of the hierarchy would be the total computable functions.
(2) https://en.wikipedia.org/wiki/Cobham%27s_thesis notes that O(n^3) is impractical with large problems. So i guess all we need is a notation that lets us go up to O(n^3) (or maybe only to O(n^2), or even to O(n log n)). I guess ideally we could express and separate O(n log n) and lower from O(n^2).
the only 2 ways i can immediately think of to do it are:
(a) for loops whose upper bound is some polynomial of n; nested for loops are multiplied
(b) equivalently, recursion with at least one strictly decreasing natural number which starts out no greater than some polynomial of n
generalizing (b) to mutual recursion, the "strictly decreasing" property has to be proved for every potential cycle in the control graph of the functions
note that for cases where you know for other reasons that the thing you want to happen is happening (e.g. a list is being sorted, so you know it'll be done before the for loop terminates), you can use a "forwhile" loop: a for loop that also 'breaks' if a condition becomes false, and which branches on whether, after the loop terminates, the condition is true or false (in the case of algorithms with a guaranteed complexity, the 'if true' branch would just raise an error); a sketch follows below.
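A rough Python sketch (hypothetical construct and names, mine) of idea (a) plus the "forwhile" loop: the iteration count is bounded by something syntactically polynomial in n, the loop may exit early once the condition goes false, and afterwards we branch on the condition -- here the guaranteed-complexity variant that raises if the bound ran out first.

def forwhile(bound, cond, body):
    # a for loop with an explicit bound that also breaks once cond() is false
    for i in range(bound):
        if not cond():
            break
        body(i)
    # branch on the condition after the loop; for an algorithm with a
    # guaranteed complexity bound, "still true" means the bound was wrong
    if cond():
        raise AssertionError("iteration bound exhausted before completion")

# usage sketch: bubble sort needs at most n+1 passes, and each pass is one
# for loop over n elements, so the n^2 cost is visible in the nested bounds
def bubble_sort(xs):
    n = len(xs)
    state = {"swapped": True}
    def one_pass(_):
        state["swapped"] = False
        for j in range(n - 1):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                state["swapped"] = True
    forwhile(n + 1, lambda: state["swapped"], one_pass)
    return xs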
note: you can also think of a sub-primitive-recursive hierarchy, e.g. https://en.wikipedia.org/wiki/Grzegorczyk_hierarchy , which partitions primitive recursion into subsets such that each subset can't contain anything growing faster than a given level of hyperoperation.
see also www.math.wisc.edu/~robbin/subrecursive-hierarchies.pdf , which i don't yet understand.
--
" Least Fixed Point is P
FO(LFP) is the set of boolean queries definable in FO(PFP) where the partial fixed point is limited to be monotone. That is, if the second order variable is P, then P_i(x) always implies P_{i+1}(x).
We can guarantee monotonicity by restricting the formula \phi to only contain positive occurrences of P (that is, occurrences preceded by an even number of negations). We can alternatively describe LFP(\phi_{P,x}) as PFP(\psi_{P,x}) where \psi(P,x)=\phi(P,x)\vee P(x).
Due to monotonicity, we only add vectors to the truth table of P, and since there are only n^k possible vectors we will always find a fixed point before n^k iterations. Hence it can be shown that FO(LFP)=P. This definition is equivalent to FO(n^{O(1)}).
...
Warning
A query in FO will then be to check if a first-order formula is true over a given structure representing the input to the problem. One should not confuse this kind of problem with checking if a quantified boolean formula is true, which is the definition of QBF, which is PSPACE-complete. The difference between those two problems is that in QBF the size of the problem is the size of the formula and elements are just boolean values, whereas in FO the size of the problem is the size of the structure and the formula is fixed.
" -- https://en.wikipedia.org/wiki/FO%28LFP%29#Least_Fixed_Point_is_P
---
the P-complete problems would seem to form an interesting set of restricted languages for expressing programs
https://en.wikipedia.org/wiki/P-complete
--
http://mathoverflow.net/questions/46350/between-mu-and-primitive-recursion
"The main thing that such a system cannot have is a universal function, provided the system has some basic closure properties... A universal two-place function, for some system, is a function g(i,k) such that for every one-place function f(k) in the system there is some natural number e with λk.f(k)=λk.g(e,k)."
" Robin Chapman's answer is very apropos. Here is a theoretical answer that points out a subtlety in the question. ... For any set A of number theoretic functions, we can define a more general class PR(A) as the smallest class of functions that satisfies the above properties and also includes every function in A.
If every function in A is computable, then every function in PR(A) is computable. Moreover, if every function in A is total then every function in PR(A) is total. ...Therefore, one answer to "I'm curious about how "much" we can compute with a formalism that guarantees halting." is "For any total computable function there is such a formalism" and more generally this is true for any effective sequence of total computable functions. "
https://en.wikipedia.org/wiki/Fast-growing_hierarchy
http://math.stackexchange.com/questions/111773/are-total-recursive-functions-recursively-enumerable
https://en.wikipedia.org/wiki/Template:Formal_languages_and_grammars
--
https://itunes.apple.com/app/lisping/id512138518?mt=8
--
" The second problem with Monads is related to their great strength - they are synonymous with Domain Specific Languages. The mission statement for domain specific languages is stupendous - don't write a complex program to solve your problem, write a simple program in a programming language you have designed solely for that task. I've already mentioned the best use of DSL/Monads - Haskell's parsec module. With Parsec the Haskell function to parse a file is identical to the Backus Naur Form description of the parse grammar - how much clearer could it be? They say that imitation is the highest form of flattery, and every parser function I have written outside of Haskell since meeting Parsec has been a shoddy facsimile of Parsec in the chosen language. The success of Parsec and its ilk has filled Hackage (the Haskell module repository) with hundreds of DSLs covering any task you care to mention.
Yes, literally hundreds of them. Hundreds of little programming languages, one for BNF parsing, one for parsing xml, one for creating PDFs, all perfectly suited to their task. Each is different, and each has its own learning curve. Consider a common task such as parsing an XML file, mutating it according to some JSON you pulled out of a web API and then writing to a PDF. In Ruby or a similar object oriented language you expect to find three APIs/gems, all with a similar object oriented syntax, but for three Haskell DSLs designed for three different tasks to share syntax implies that their authors failed to optimise them for those tasks, hence instead of five minutes with API documentation you have hours of DSL tutorials ahead of you before you can begin work. It is this subtle difference that plants Haskell on the academic side of the town and gown divide. With Haskell the focus is on the language, not the problems you are solving with the language. ...
It would be dishonest not to point out that despite its utility to hackers, Scheme shares Haskell's unsuitability for production code. The difference is that Scheme is limited by its minimalistic standard library, not by a flaw in the language, and you can "upgrade" to a syntactically similar, but heavyweight, cousin such as Common Lisp, or Clojure, to get work done."
--
scala: "9. Nil/null as an overloaded semantic value (empty list, empty string, value-not-present) - Yes and No, Option[T] is generic. If you want to know if any container(including strings) is empty, you say container.isEmpty(). But not container == null."
--
fixpoint-finding operator?
--
concept from dream:
Turing-complete language + unnecessary subturing language for something to do with variable names (as opposed to fixed registers) + another unnecessary language addition, discovered even later. a feeling lik for value manipulation/e induction/inductive base case/()
mb the first subturing language is for variable naming/ENV manipulation, and the second is for call stack manipulation? the subturing languages would make metaprogramming unnecessary in most cases by allowing you to express most of it using the subturing language?
and of course we have our pattern recognition/matching language, too. perhaps this is like the first one (variable naming?) although i don't think so/am not sure.
and should we have a separate subturing language for creating control flow abstractions e.g. for, continue, break, while? i'm thinking that the above covers it in most cases, and ordinary hygienic (with optional breakout of capitalized identifiers) macros for the rest. hmm, but maybe some logic programming for control flow, possibly via pattern matching?
the dream seemed like the two subturing languages matched the last two cases in primitive recursion (composition and recursion)
--
the use of "zipWith" in haskell's fibonacci fn suggests that indeed we should have syntactic sugar to transform operators into operators that act pointwise on lists, because 'map' only makes an operator pointwise on one of its operands, whereas we may need to make it pointwise on many operands at once:
fib :: [Word32]
fib = [0,1] ++ zipWith (+) fib (drop 1 fib)
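A quick Python sketch (mine, hypothetical helper) of what such sugar would desugar to: lift an n-ary operator so it acts pointwise across several list operands at once, the way zipWith/zipWith3 do in Haskell.

def lift(op):
    # turn an n-ary operator into one that acts pointwise on n lists at once
    def lifted(*lists):
        return [op(*args) for args in zip(*lists)]
    return lifted

plus = lift(lambda a, b: a + b)
plus([0, 1, 1, 2], [1, 1, 2, 3])                           # [1, 2, 3, 5]
lift(lambda a, b, c: a + b * c)([1, 2], [3, 4], [5, 6])    # [16, 26]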
-- " In Haskell,latch can be defined
latch :: [Bool] -> [Bool] latch x = out x where out ls = [False] ++ zipWith xor ls (out ls) xor n m = (n
| m) && not (n && m) |
and then in Copilot (xor is a built-in operator for Copilot):
latch :: Stream Bool -> Stream Bool latch x = out where out = [False] ++ x `xor` out "
--
scsh, pysh, tclsh
--
" Any sufficiently large program eventually becomes an operating system.
As such, computer scientists should be aware of how kernels handle system calls, paging, scheduling, context-switching, filesystems and internal resource management.
" bayle: ... and access control (dunno if i agree that everybody should kno this stuff but mb oot's designer should)
--
https://en.wikipedia.org/wiki/Confused_deputy_problem
" A confused deputy is a computer program that is innocently fooled by some other party into misusing its authority. It is a specific type of privilege escalation. In information security, the confused deputy problem is often cited as an example of why capability-based security is important.
In the original example of a confused deputy,[1] there is a program that provides compilation services to other programs. Normally, the client program specifies the name of the input and output files, and the server is given the same access to those files that the client has.
The compiler service is pay-per-use, and the compiler program has access to a file (dubbed BILL) where it stores billing information. Clients obviously cannot write into the billing file.
Now suppose a client calls the service and specifies BILL as the name of the output file. The service opens the output file. Even though the client did not have access to that file, the service does, so the open succeeds, and the server writes the compilation output to the file, overwriting it, and thus destroying the billing information. "
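A small Python sketch (hypothetical functions, not from the original paper) of the capability-style fix: instead of naming the output file, the client hands over an already-open writable file object, so the server never exercises more authority than the client has.

def compile_by_name(source, output_path):
    # deputy-prone API: the server opens whatever path the client names,
    # using the server's own authority -- so "BILL" would be writable here
    with open(output_path, "w") as out:
        out.write(compile_source(source))      # compile_source is hypothetical

def compile_by_capability(source, output_file):
    # capability-style API: the client must supply an open, writable file
    # object it could create itself; the server adds no authority of its own
    output_file.write(compile_source(source))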
--
http://www.joelonsoftware.com/articles/Unicode.html
--
" We introduce a programming paradigm in which statements are constraints over partial orders. A partial order programming problem has the form
minimize u
subject to u1 >= v1, u2 >= v2, ...
where u is the goal, and u1 >= v1, ... is a collection of constraints called the program. A solution of the problem is a minimal value for u determined by values for u1,v1, etc. satisfying the constraints. The domain of values here is a partial order, a domain D with ordering relation >=. "
e.g. a classification problem can be phrased as "find the most specific class such that that class implies the observed properties", a diagnostic program can be phrased as "find the most specific cause which implies the observed effects"
if the constraints are continuous and monotone, then a least fixpoint exists
i think it also says that if the constraints are reductive, then a least fixpoint exists, but i dont have time right now to read more about this.
logic and functional programming can be unified by the partial order programming paradigm in which the partial order specifies allowable substitution
-- http://www.cs.ucla.edu/~stott/pop/Parker.Partial.Order.Programming.ps.Z
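A minimal Python sketch (mine) of why monotone constraints give a least fixpoint: Kleene iteration from the bottom element, assuming the ascending chain stabilizes after finitely many steps.

def least_fixpoint(f, bottom):
    # iterate x, f(x), f(f(x)), ... from the bottom of the partial order;
    # for a monotone f with no infinite ascending chains this stops at
    # the least fixed point
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# e.g. the least set containing 0 and closed under n -> n+1 for n < 5
least_fixpoint(lambda s: s | {n + 1 for n in s if n < 5}, frozenset({0}))
# frozenset({0, 1, 2, 3, 4, 5})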
--
a problem with XML:
"Transformations, even identity transforms, result in changes to format (whitespace, attribute ordering, attribute quoting, whitespace around attributes, newlines). These problems can make "diff"ing the XML source very difficult."
--
http://axisofeval.blogspot.com/2013/05/a-new-low-in-programming-language.html
" Wednesday, May 8, 2013 A new low in programming language design and implementation The new Wat is the best, most lightweight way to implement a JavaScript?-based programming language I have found so far.
Basically, I get away from JS as quickly and painlessly as possible, and start writing the language in itself.
So I define a very small set of primitives on the joint foundation of Kernel-like first-class lexical environments and fexprs and delimited continuations. Fexprs are a great tool for language-oriented programming, and delimited continuations allow me to escape from the browser's (and Node's) async hell and implement any concurrency and effect system I like.
To fexprs I also add macros. When a macro is used as the operator of a form, the form's code gets changed to the macro's output when the macro is first called, a technique I learned from here. I like macros because they make syntactic abstraction cost-free - with fexprs alone there is always an interpretative overhead. Still, Wat macros, like fexprs, do not work with quoted identifiers, but with first-class values, so many hygiene problems are avoided.
To delimited control I also add classic first-order control (sequential, conditional, loop, throw, catch, finally). This runs on the ordinary JS stack. Only when a continuation is captured does the stack get reified on the heap.
And last but not least, I use a JSON-based syntax for writing the language in itself. At first this was just intended as a quick way to not have to specify a parser for Wat, but I'm starting to like it. It allows Wat to truly be embedded in JavaScript.
Wat does not have a type tagging or object system. It uses the raw JavaScript values.
The whole implementation is roughly 350 lines of JavaScript. After these 350 lines, I can already write Wat in Wat, which is just great. -- Manuel Simoni

Joe Taber said...
Is "wat" a reference to this short sarcastic talk that mostly deals with JS? https://www.destroyallsoftware.com/talks/wat

Manuel Simoni said...
Yeah
"
--
https://www.destroyallsoftware.com/talks/wat
--
toread: [July 2013] Backpack: retrofitting Haskell with interfaces, with Scott Kilpatrick, Derek Dreyer and Simon Marlow. An exploration of a mixin-module approach to separate modular development (SMD) for Haskell. In submission.
[July 2012] Safe Haskell, with David Terei, David Mazieres, and Simon Marlow, describes how to make Haskell safe enough to confine and safely execute untrusted, possibly malicious code.
---
http://blog.sigfpe.com/2012/03/overloading-python-list-comprehension.html
this guy points out a feature i didn't know about Python lists, namely, that
" [(y, z) for y in [1, 2] for z in ['a', 'b']]
This isn't quite the same as
[[(y, z) for z in ['a', 'b']] for y in [1, 2]]
but it's close. The latter produces nested lists whereas the first gives one flat list. We can think of nested comprehensions as applying a flattening operation. "
and then shows how to implement that using concatMap and singleton (basically,
"
concat(map(lambda y: concat(map(lambda z: [(y, z)], ['a', 'b'])), [1, 2]))
Every map is wrapped in a concat, so we can simplify slightly. Let's define:
def concatMap(f, xs): return [y for x in xs for y in f(x)]
def singleton(x): return [x]
Our expression becomes:
concatMap(lambda y: concatMap(lambda z: singleton((y, z)), ['a', 'b']), [1, 2])
") and then suggests that by doing it this way you could generalize list comprehensions beyond lists
if you don't have nested lists, though, then you can just use map to generalize, which is much simpler.
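A short Python sketch (hypothetical, in the spirit of the post) of that generalization: the same concatMap/singleton skeleton works for any container that supplies its own versions of those two functions; here an option type is modelled as a list of length 0 or 1.

def opt_concatMap(f, m):
    # option modelled as a 0- or 1-element list
    return f(m[0]) if m else []

def opt_singleton(x):
    return [x]

def pair(mx, my):
    # the comprehension skeleton from above, reused for options:
    # a pair is produced only when both values are present
    return opt_concatMap(lambda x: opt_concatMap(lambda y: opt_singleton((x, y)), my), mx)

pair([1], ["a"])   # [(1, 'a')]
pair([], ["a"])    # []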
--
may want a way to evaluate the value of a variable at compile time and then make it a constant at runtime.. this saves time at runtime
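A tiny Python sketch of the idea, with module load time standing in for compile time: compute the value once when the module is loaded, then treat it as a constant at run time instead of recomputing it on every call.

import math

# computed once at load time, used as a constant afterwards
_SQRT_TABLE = tuple(math.sqrt(i) for i in range(1024))

def fast_sqrt(i):
    return _SQRT_TABLE[i]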
--
doug's notion of a recursive factorialization
--
" I'm not advocating totalitarian FP. What I would love is for people to just take some of the biggest lessons learned and apply them. Hell, if I could just get immutability by default (we all know untamed mutability is dangerous), no untyped null (Hoare's billion-dollar mistake), and truly useful first-class functions (with everything that implies), I'd say that's enough to mollify me for the present. Thinking way, way, way forward, I would like to see people really studying and rethinking programming in general. Just because this is how we are doing this now doesn't mean it's the best way. "Normality" is sort of an accident - I'd love to see a more studied approach. "
--
" upvote
hackinthebochs 2 days ago
| link |
I love coding in C++. The feeling of being so close to the metal that your actions are directly controlling the machine is something that was lost immediately when I moved to Java. It's even more of a disconnect in higher-level languages.
twoodfin 2 days ago
I have sort of the opposite feeling: I love that in C++ I can create very nice abstractions while being fairly confident that the compiler will transform them into code with C-like performance. That is, as long as you know what to avoid (namely: heap allocations and unnecessary copying).
Pxtl 2 days ago
This is the thing. This is what gets lost. C++ was designed with the premise that the language allows you to use it both in a C-like context and a Java-like context.
The problem is that some of the baggage of the former interferes with the latter. C++ is danged ugly in some places in ways that could be expressed cleanly in higher-level languages.
That's what C++ coders want - C++ with modules and real generics and fast compile-times and ways to cleanly implement and use garbage-collection if you so choose. A language that lets you say incredibly complicated and detailed things, and then lets you abstract them away into a library or a class or some other unit of organization that lets another coder use your stuff without absorbing the complexity.
All these "C++ replacements" assume that the solution is to gut out all the C-like stuff. What they should be looking to do is giving developers clean alternatives to the C-like stuff, including ways for the hardcore developers to provide their own clean alternatives to the C-like stuff. Don't give us garbage collection, give us the tools to write libraries that provide clean garbage collection to a developer.
reply "
--
"
I'm not an expert but in the gamedev domain, control over memory is fairly vital. It seems like it would be for lots of the other stuff C++ is used with too. In C++ I can allocate all my memory upfront in a pool since I know the exact size of my level, the number of objects and so on. Then use custom allocators/static for just about everything. When I make an object at runtime I can just snag preallocated space from the pool. With the ability to cast the pool's primitive data types to pointers I can even save on any need to index the blocks since I can use the unused memory to point to the next block of unused memory (although it's probably a minor optimization).
Go drops that control totally in favour of its inbuilt garbage collector, which the Go devs think they can just get right. That seems unlikely (the current implementation is apparently a quite bad stop-the-world implementation).
Another issue that strikes me is library building. Afaik Go doesn't produce object files that can be linked against by non-Go stuff. C does this; it has an ABI.
This means I can write one library in C/C++ (and apparently Rust) and then have wrappers for just about every other programming language in existence. (Although C++ can make this painful with things like exceptions http://250bpm.com/blog:4 ). "
--
" That said, I can report on what I prefer right now, knowing what I already know (having spent 4 years programming java professionally as well) - and I truly prefer a more functional approach that comes a lot more easily in languages that support closures and first class functions. It also strikes me that many of the best practices in java / c++ are basically moving towards more functional programming (prefer composition to inheritance, make objects immutable by default, interfaces over classes when possible, dependency injection / referential transparency and parameterizing methods by interfaces e.g functions). "
--
"
acqq 454 days ago
Now, what would be the ideal language that would be "better than C" in my opinion? Let's call it X. In my opinion, X would be some modification of C with the following properties: