notes-computer-programming-programmingLanguageDesign-programmingConstructs3


there's values and references. and, there's literals and variables. these are analogous entities, but the former two are types of values that can go into variables, whereas the latter two are types of syntactical entities that can go into code (values that can go into ASTs). also, when a variable is encountered in code, the default is to replace it with its value (to override this, you must "quote"), whereas when a reference is encountered in a variable, the default can be that (like in Python), or it can be to leave it as a reference (like in C). The Python method seems clearer b/c you don't have to deal with the distinction b/t vals and refs inside variables. meta project connection?
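
a quick Python sketch of the difference (the list and names here are just for illustration):

    # Python: a variable holds a reference, and mentioning the variable
    # implicitly follows the reference to the shared object.
    xs = [1, 2, 3]
    ys = xs          # copies the reference, not the value
    ys.append(4)
    print(xs)        # [1, 2, 3, 4] -- both names reach the same object
    # In C you'd choose explicitly: int *p = &x; keeps a reference around,
    # and you must write *p to follow it.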

inversion of control: when the application is called as something like an "event handler" for a framework (rather than the "typical" situation, where the application is in charge of the flow of control)

dependency injection: in OOP: ok, so you have an object with a field that contains a reference to another object. sometimes the first object calls a method on the second object. ok, great, so the implementation of the two objects is decoupled. But at some point, you're going to have to instantiate the first object and the second object, and pass the first object a reference to the second one. this is a special case of inversion of control because the first object is getting called by someone else.

some ways to do this:

service locator: an object that you can query to find other services

i guess a virtue of setter or interface injection is that the framework can swap out services at mid-runtime by calling the component again (if this is allowed..). similarly, a virtue of service locator is that the component can call the framework again later. Avalon's approach (interface injection of a service locator) gives both of these. otoh the service locator should also be provided in the constructor so there aren't two setup phases.
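
a minimal Python sketch of the three wiring styles above (all names made up for illustration):

    class EmailSender:
        def send(self, msg): print("sending", msg)

    class Notifier:
        def __init__(self, sender):       # constructor injection: wired once, up front
            self.sender = sender
        def set_sender(self, sender):     # setter injection: the framework can call
            self.sender = sender          # this again to swap the service mid-runtime

    class ServiceLocator:                 # service locator: the component can call
        def __init__(self):               # the framework again later
            self._services = {}
        def register(self, name, svc):
            self._services[name] = svc
        def lookup(self, name):
            return self._services[name]

    locator = ServiceLocator()
    locator.register("sender", EmailSender())
    n = Notifier(locator.lookup("sender"))   # the locator itself could be the ctor arg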

Fowler's article: http://www.martinfowler.com/articles/injection.html notes: in his PicoContainer constructor injection example, in the configureContainer routine, you didn't have to explicitly register the link from MovieLister to MovieFinder; PicoContainer must have introspected and seen that MovieLister's constructor takes a param of type MovieFinder. The choice of which MovieFinder implementation to use was explicitly made; so PicoContainer must have used that as the arg for MovieLister's constructor. By contrast, in his Spring setter injection example, the fact that MovieLister needed a MovieFinder did have to be explicitly declared. Clearly, I prefer the implicit introspective way.
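
a rough Python sketch of what that introspective wiring might look like (my guess at the mechanism, not PicoContainer's actual implementation; it reads constructor type annotations via inspect):

    import inspect

    class Container:
        def __init__(self):
            self._impls = {}    # interface type -> chosen implementation instance

        def register(self, iface, instance):
            self._impls[iface] = instance

        def build(self, cls):
            # introspect the constructor and inject a registered implementation
            # for each annotated parameter -- no explicit link registration
            sig = inspect.signature(cls.__init__)
            args = [self._impls[p.annotation]
                    for p in list(sig.parameters.values())[1:]   # skip self
                    if p.annotation in self._impls]
            return cls(*args)

    class MovieFinder: ...
    class ColonDelimitedMovieFinder(MovieFinder): ...

    class MovieLister:
        def __init__(self, finder: MovieFinder):
            self.finder = finder

    c = Container()
    c.register(MovieFinder, ColonDelimitedMovieFinder())   # the one explicit choice
    lister = c.build(MovieLister)   # the MovieLister->MovieFinder link is inferred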

  J's dynamic component system/implementation choosing system will have to overcome issues like this: "If you have multiple ways to construct a valid object, it can be hard to show this through constructors, since constructors can only vary on the number and type of parameters. This is when Factory Methods come into play, these can use a combination of private constructors and setters to implement their work. The problem with classic Factory Methods for components assembly is that they are usually seen as static methods, and you can't have those on interfaces. You can make a factory class, but then that just becomes another service instance. A factory service is often a good tactic, but you still have to instantiate the factory using one of the techniques here."

Fowler opines that "Of course the testing problem is exacerbated by component environments that are very intrusive, such as Java's EJB framework. My view is that these kinds of frameworks should minimize their impact upon application code, and particularly should not do things that slow down the edit-execute cycle. "

aggregation: like consultation, but the object being consulted is just some other object, which persists before and/or after the creation/destruction of the consulting object (i.e. it's not just some private object owned by the consulting object and destroyed when the consulting object is destroyed)

how does Haskell do component programming without typecasts? see: Haskell COM interface at http://www.haskell.org/hdirect/design-7.html. But that's an FFI.

toread: http://portal.acm.org/citation.cfm?id=1294920 A type-level approach to component prototyping. haskell, and also has a useful-looking basis set of functionality for component programming.

http://www.md.chalmers.se/Cs/Research/Functional/Fudgets/Intro/ Haskell GUI approach

A computational model of classical linear logic: Is classical linear logic inherently parallel? (1997)

Release 0.6 of Monadic Constraint Programming

We've just released version 0.6 of the Monadic Constraint Programming framework on Hackage.

This release provides a whole lot of generic support for Finite Domain (FD) constraint solvers: a common modeling language and infrastructure for targeting different backends. Very useful if you happen to develop an FD solver and want to hook it up to our framework to benefit from its advanced search capabilities.

Users will of course be much more interested in the actual backends that we provide. Besides the basic Haskell FD solver we had before, there are now three different ways of interfacing Gecode, one of the best FD solvers out there and open source too. To get started, the examples directory shows how to model a number of well-known problems. -- Tom Schrijvers

EffectiveAdvice: AOP, mixin inheritance, monads, parametricity, non-interference, ... How to reason about effectful advice? Write your AOP programs in Haskell, the world's best imperative programming language. Use monads and monad transformers for effects and functional mixins for advice. In return you get powerful reasoning tools: equational reasoning and parametricity.

Read the technical report. The appendix has some pretty cool parametricity proofs on non-interference of advice, based on Janis Voigtländer's ICFP'09 paper "Free Theorems Involving Type Constructor Classes".

    EffectiveAdvice: Overview, background and proofs
    Bruno Oliveira, Tom Schrijvers and William Cook
    Abstract
    Advice is a mechanism, widely used in aspect-oriented languages, that allows one program component to augment or modify the behavior of other components. Advice is useful for modularizing concerns, including logging, error handling, and some optimizations, that would otherwise require code to be scattered throughout a system. When advice and other components are composed together they become tightly coupled, sharing both control and data flows. However this creates important problems: modular reasoning about a component becomes very difficult; and two tightly coupled components may interfere with the control and data flows of each other.
    This paper presents EffectiveAdvice, a disciplined model of advice, inspired by Aldrich's Open Modules, that has full support for effects in both base components and advice. With EffectiveAdvice, equivalence of advice, as well as base components, can be checked by equational reasoning. The paper describes an implementation of EffectiveAdvice as a Haskell library and shows how to use it to solve well-known programming problems. Advice is modeled by mixin inheritance and effects are modeled by monads. Interference patterns previously identified in the literature are expressed as combinators. Parametricity, together with the combinators, is used to prove two harmless advice theorems. The result is an effective model of advice that supports effects in both advice and base components, and allows these effects to be separated with strong non-interference guarantees, or merged as needed.


http://tomschrijvers.blogspot.com/2009/05/dictionaries-eager-or-lazy-type-class.html

http://tomschrijvers.blogspot.com/2009_01_01_archive.html

http://en.wikipedia.org/wiki/Hollywood_Principle

http://en.wikipedia.org/wiki/Object_composition

mock object testing: http://martinfowler.com/articles/mocksArentStubs.html create a fake version of some object, pass it to the rest of the program, and then put code in it to verify that it is being called correctly (method calls in the right order, with the right arguments). this provides "behavior verification", i.e. a test that the program is interacting with the mocked obj in the proper way, rather than just "state verification", i.e. a test that the object ends up in the proper state. for jasper: how can a mock obj be constructed in a statically typed language? perhaps with multi-stage metaprogramming?
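
e.g., in Python with unittest.mock (the warehouse/order scenario is Fowler's; behavior verification means asserting on the recorded calls):

    from unittest.mock import Mock, call

    warehouse = Mock()
    warehouse.has_inventory.return_value = True

    def fill_order(wh, item, qty):
        if wh.has_inventory(item, qty):
            wh.remove(item, qty)

    fill_order(warehouse, "talisker", 50)

    # behavior verification: the right methods, right args, right order
    warehouse.has_inventory.assert_called_once_with("talisker", 50)
    warehouse.remove.assert_called_once_with("talisker", 50)
    assert warehouse.mock_calls == [call.has_inventory("talisker", 50),
                                    call.remove("talisker", 50)]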

note: even though we only have a few types of matched delimiters on the keyboard ( (), {}, [], <> ), you can combine ordinary parens with "modifiers" to make as many semantic types of matched delimiters as you want (i.e. m() , m()m, &( )&, etc). so it's just a matter of what's easier to read and what's easier to type.

should something approximating the capabilities of static methods and static fields in Java be in Jasper? yes, this is a global, but how to do dynamic service locators otherwise?

j: thread-local variables?

j: erlang's reliability, hot-code swapping (should factor in with making the implementation choice dynamic, not just at compile time -- swapping in a new implementation should only break things using those (parts?) of its interfaces which have been removed or whose behavior becomes incompatible)

thinking about how to do a dynamic service locator w/o typecasting. this won't work but: one could have a hetero array that stores tuples (value, type), where the type has to be the type of that value. now you can look at the type of the tuple and that tells you what to cast it to. but this doesn't help b/c the client already knows what it wants; and throwing an error if the 'type' member of the tuple is wrong isn't different from just asking the value what type it is via introspection, and throwing an error if it's wrong (or, otherwise, an error will be thrown at cast time if the cast is wrong).
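
for comparison, here's the tuple idea as a type-indexed registry in Python (my own sketch); note the cast survives, which is exactly the objection above:

    from typing import Type, TypeVar, cast

    T = TypeVar("T")

    class Registry:
        def __init__(self):
            self._by_type = {}    # type -> a value of that type

        def put(self, tp: Type[T], value: T) -> None:
            self._by_type[tp] = value

        def get(self, tp: Type[T]) -> T:
            # the cast is still here: trusting the stored type tag is no better
            # than introspecting the value and throwing an error if it's wrong
            return cast(T, self._by_type[tp])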

"inheritance anomaly": http://citeseer.ist.psu.edu/old/matsuoka93analysis.html Analysis of Inheritance Anomaly in Object-Oriented Concurrent Programming Languages (1993) a list of a bunch of situations in which synchronization code cannot be inherited


http://en.wikipedia.org/wiki/Category:Concurrent_programming_languages

http://en.wikipedia.org/wiki/Join-calculus_%28programming_language%29

www.informatics.sussex.ac.uk/users/vs/research/paps/anomalySurvey.pdf The inheritance anomaly: ten years after. notes: ok paper. contains survey of some recent sync techniques, and notes about the types of situations in which synchronization code prevents things from being inherited in these languages. conclusion is that by separating sync code from other code, only the sync code needs to be changed. references AOP. implicitly suggests that a programming language should include a number of baked-in "aspects", mutex specs being one of these. what should the others be? logging, per-user/group access control, error handling, what else?


more concurrency models

petri nets

meta-models: bisimulation (equivalence relation on automata)

process logic primitives: a|b -- can be executed simultaneously, or in any order; a+b -- opponent can choose which to execute, only one is executed; ab -- first a, then b, executed in sequence; a CONC b -- type of concurrency is input-dependent. what else? some ideas: a SYNC b -- lower-level synchronization guarantees (todo), a ++ b -- opponent chooses a+b or ab, a ? b -- opponent chooses a ++ b or to do neither of them

vaughan pratt's "Transition and cancellation in concurrency and branching time" boole.stanford.edu/pub/seqconc.pdf

ideas to use graphs in modelling concurrency

partial ordering (dependencies) of operations

graph of computers

graph of processes on computers

temporal unrolling of the previous 2

how to represent relational data in graphs? look at relational bayes nets?

doug's suggestion: decompose as parallel, not concurrent (no inter-process communication except between parent and child at spawn and result time)

arrow's thm for distributed systems? either you have to give up composability, or irreversibility, or what else? lock-free? but don't we have a thm that wait-free is always possible, given enuf memory? mb give up not [memory that grows linearly w/ # of processes]?

wait free, lock free

eiffel's SCOOP extension:

http://scoop.origo.ethz.ch/wiki/Tutorial

" ... All calls to operations on a particular object are handled by a single processor; we say that the processor handles the object. With these basic concepts, we may isolate the difference between sequential and concurrent computation down to a single key point: what happens in the basic operation of O-O computation, a "feature call" of the form

x.f (a)

In a sequential context this is synchronous: computation doesn't proceed until the call to f has been completed. In a concurrent context, if x denotes an object handled by a different processor (different from the processor handling the object on whose behalf the call is being executed), the communication can be asynchronous: computation can just proceed without waiting for f to terminate. That's indeed the whole idea of concurrency: several computations can proceed in parallel, not waiting for each other until they need to. When they indeed "need to" is, in SCOOP, determined not by the programmer but automatically by the SCOOP mechanisms: the processor of the client object will need to resynchronize with the processor in charge of x when its computation requires access to a query on x. This SCOOP policy is called wait by necessity.

To distinguish between synchronous and asynchronous calls, the program must specify whether the processor handling x is the same or another. This leads to the single language extension required by SCOOP: separate declarations. If x represents an object handled by another processor, it will be declared (in Eiffel syntax) not as

x: SOME_TYPE

but as

x: separate SOME_TYPE

This doesn't specify the processor but does specify that it is (or may be) a different one, yielding a different semantics of calls.

For simple reasons of being able to reason about programs, calls on a separate object are exclusive: only one client can use a separate supplier at a time. The mechanism for reserving an object is simply argument passing: a call of the form

     g (x)

or

     b.g (x)

where x is separate, will only proceed when the object attached to x becomes available to the caller; it will then retain that object for the duration of the call. Calls of the above basic form x.f (a), where x is separate, are only permitted when x is such a formal argument of the enclosing routine, here g. This rule guarantees predictability of the code and avoids major mistakes; even for an experienced concurrent programmer, it is very easy - in a context where the rule would not apply - to believe instinctively that in

x.insert (a, some_position) ... y = x.value (some_position)

the element retrieved by the last instruction is the one inserted by the first instruction. But some other separate client may have polluted the structure by squeezing in another insert instruction in-between, even though this is not reflected in the code. Such bugs are very difficult to identify because they are by their very nature transient - the problem will occur only rarely, and in appearance haphazardly. The SCOOP rules guarantee that the above calls may only occur in a routine of which x is a separate argument. So the intuitive expectation that the two calls act on the same object with no competing access in-between - as suggested by the code - indeed matches reality. If this property is not required, the calls to insert and value may just appear in different routines of the class, for a finer level of access control granularity.

The final synchronization mechanism is provided by a natural extension of the Design by Contract constructs of Eiffel. A precondition on a separate target, as in

    insert (structure: CONTAINER; element: SOME_TYPE; position: INTEGER)
        require
            structure_not_void: structure /= void
            structure_not_full: not structure.is_full
            element_not_void: element /= void
            valid_position: structure.is_valid_index (position)
        do
            ...
        ensure
            ...
        end

cannot keep its usual semantics of a correctness condition, because even if the client ensures it prior to a call some other client can invalidate it before the routine actually starts its execution, so that the routine would be wrong in assuming the precondition. It has to be reinterpreted as a wait condition. Hence a call such as

insert (s, e, p)

will proceed only when s is (as noted before) available on an exclusive basis to the client, and satisfies the precondition. This convention provides a simple and powerful synchronization technique.

These are the basic concepts of SCOOP. They are complemented by a few library mechanisms that tune the mechanism, for example to specify limits on the acceptable waiting time when trying to reserve an object. ... "
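
SCOOP's wait by necessity is roughly the futures pattern; a loose Python analogy (not Eiffel semantics, just the flavor):

    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor()

    def query():               # stands in for a query on a separate object x
        return 42

    fut = pool.submit(query)   # asynchronous "feature call": don't wait for it
    # ... the client's computation proceeds here without waiting ...
    value = fut.result()       # resynchronize only when the result is needed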

in the imperative world (or in jasper's imperative situations), where state can change in mid-procedure, you have not just timeless assertions, but pre-conditions (true at beginning), post-conditions (true at end), other conditions (true at interior points of time), and scoped invariants (true throughout scope). the former three can be captured by seq'd assertions, the first two can be implicitly seq'd by putting assertions at beginning or at end. check eiffel to see if there are other sorts of assertions
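
a Python sketch of the four kinds (the context-manager scoped invariant is my own illustration; it can only re-check at entry and exit, not continuously):

    from contextlib import contextmanager

    @contextmanager
    def invariant(check):
        assert check()        # must hold at entry to the scope
        try:
            yield
        finally:
            assert check()    # and again at exit

    def withdraw(account, amount):
        assert amount > 0                                  # pre-condition
        old = account["balance"]
        with invariant(lambda: account["balance"] >= 0):   # scoped invariant
            account["balance"] -= amount                   # state changes mid-procedure
        assert account["balance"] == old - amount          # post-condition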

OOP system mixing haskell and CLOS: http://lambda-the-ultimate.org/node/3569

"attribute data structures with byte layout details for the compiler to follow, and you can't save that information as publicly accessible assembly details, which .NET also allows." -- http://lambda-the-ultimate.org/node/3569#comment-50883

"Your description of 'Behavior' sounds to me like a form of structural typing. Likewise 'Taxonomy' sounded very similar to nominal typing. The 'Implementation' category sounds maybe like a more powerful version of type qualifiers (final static etc..). " -- http://lambda-the-ultimate.org/node/3569#comment-50616

scoping: transactions, scoped invariants. separate syntax, or force subroutine?

"As an aside, I once got into a convo with one of the most vocal advocates of OSGi. He said he'd "like to see the standard pushed on other platforms like C++ and .NET." I asked him what he meant by that and whether .NET already fulfilled most of OSGi with its core Ecma .NET attributes such as AssemblyInfoAttribute?." -- http://lambda-the-ultimate.org/node/3569

nominative typing and structural typing -- structural typing is like graph shape matching in jasper, and nominative typing is like phantom types or types to denote units -- how to combine? sounds like the obvious thing to do is struct typing with constants in the pattern to be matched serving as "type tags" when nominative typing is desired, i.e. "inches" is declared as a pattern with two children, one a number (edge label 0), and one a metaattribute with edge label "unit" and the constant value "inch". hmmm, or mb better to take numbers themselves, clone them, and add a metaattrib named "unit", rather than adding an additional level of indirection by referencing the underlying number. or maybe combining these by having the metaattribute notation transparently construct and access a wrapper type which stores the underlying value at edge label 0.
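
roughly, in Python terms (a toy rendition of the wrapper idea):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tagged:
        value: float    # edge label 0: the underlying number
        unit: str       # the "unit" metaattribute; the constant "inch" plays
                        # the nominative type-tag role in the structural pattern

    length = Tagged(12.0, "inch")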

might call the composable types used in normal Jasper code "features" b/c ppl associate "type" with what in jasper is the fully specified type of a variable.


decidable dependent types stuff:

Dependent types are based on the idea of using scalars or values to more precisely describe the type of some other value. For example, "matrix(3,3)" might be the type of a 3×3 matrix. We can then define typing rules such as the following rule for matrix multiplication:

    matrix_multiply : matrix(k,m) × matrix(m,n) → matrix(k,n)

where k, m, n are arbitrary positive integer values. A variant of ML called Dependent ML has been created based on this type system, but because type-checking conventional dependent types is undecidable, not all programs using them can be type-checked without some kind of limitations. Dependent ML limits the sort of equality it can decide to Presburger arithmetic; other languages such as Epigram make the value of all expressions in the language decidable so that type checking can be decidable, ...
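
for contrast, here's the check a language without dependent types pushes to run time, written in Python; a dependent type like matrix(k,m) would reject the mismatch at compile time:

    def matrix_multiply(a, b):
        k, m = len(a), len(a[0])
        m2, n = len(b), len(b[0])
        # matrix(k,m) x matrix(m,n): the inner dimensions must agree
        assert m == m2, "dimension mismatch"
        return [[sum(a[i][j] * b[j][c] for j in range(m)) for c in range(n)]
                for i in range(k)]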

http://www.cs.nott.ac.uk/~txa/publ/ott-conf.pdf Observational Type Theory (epigram 2)

http://en.wikipedia.org/wiki/Intuitionistic_type_theory something to do with Epigram

http://en.wikipedia.org/wiki/ATS_%28programming_language%29 supersedes Dependent ML

  ATS also has the good idea of token prefixes which indicate boxed or unboxed types

---

linear types, uniqueness types

http://en.wikipedia.org/wiki/Charity_%28programming_language%29 more than primitive recursive, but not turing complete; category-theory-esque

http://en.wikipedia.org/wiki/Concurrent_constraint_logic_programming


phantom types: types without any values; used to embed type systems into haskell
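
the same trick works in Python's static typing, for what it's worth (a sketch; the phantom parameter is never stored, and a checker like mypy does the enforcement):

    from typing import Generic, TypeVar

    S = TypeVar("S")

    class Raw: ...          # phantom "states": no values of these ever exist
    class Sanitized: ...

    class Query(Generic[S]):    # S appears only at the type level
        def __init__(self, text: str):
            self.text = text

    def sanitize(q: Query[Raw]) -> Query[Sanitized]:
        return Query(q.text.replace("'", "''"))

    def run(q: Query[Sanitized]) -> None: ...
    # run() on a Query[Raw] is flagged by the type checker, never at runtime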

manifest typing: when you have to declare types. vs. dynamic typing, and vs. type inference.



http://c2.com/cgi/wiki?ComputationalPrimitives : " , but here's what I think of when I hear of computation primitives:

Imperative language primitives:

object-oriented language primitives:

logic programming language primitives:

functional language primitives:

relational language primitives:

Digital logic primitives:

" hua Bloch's 2006 JavaOne? presentation:

 public interface Shop<T> {
	T buy();
	void sell(T item);
	void sell(Collection<? extends T> lot);
	void buy(int numItems, Collection<? super T> myStuff);
 }
 // You can buy a bunch of models from the train shop
 modelTrainShop.buy(5, myModels);
 // You can sell your train set to the model shop
 modelShop.sell(myTrains);

Basic rule: "PECS" -- producer extends, consumer super.


i think covariance is for code reuse, contravariance for substitution. a similar view:

" Covariance and contravariance : conflict without a cause Castagna, Giuseppe ACM Transactions on Programming Languages and Systems Vol.17, No. 3 (May 1995), pp. 431-447

From the Abstract: (http://citeseer.ist.psu.edu/castagna94covariance.html) (Full text is on http://www.cs.trinity.edu/~mlewis/CSCI3294-F01/Papers/p431-castagna.pdf.)

In type-theoretic research on object-oriented programming, the issue of "covariance versus contravariance" is a topic of continuing debate. In this short note we argue that covariance and contravariance appropriately characterize two distinct and independent mechanisms. The so-called contravariance rule correctly captures the subtyping relation (that relation which establishes which sets of functions can replace another given set in every context). A covariant relation, instead, characterizes the specialization of code (i.e., the definition of new code which replaces old definitions in some particular cases). Therefore, covariance and contravariance are not opposing views, but distinct concepts that each have their place in object-oriented systems. Both can (and should) be integrated in a type-safe manner in object-oriented languages.

We also show that the independence of the two mechanisms is not characteristic of a particular model but is valid in general, since covariant specialization is present in record-based models, although it is hidden by a deficiency of all existing calculi that realize this model. As an aside, we show that the lambda-calculus can be taken as the basic calculus for both an overloading-based and a record-based model. Using this approach, one not only obtains a more uniform vision of object-oriented type theories, but in the case of the record-based approach, one also gains multiple dispatching, a feature that existing record-based models do not capture.

The resulting type system is similar to that of Cecil. "


defn of contra vs. covariance:

" Say you have a class Foo, which has a method bar(). Method bar() takes an argument of type middle_type, and returns a value of type middle_type.

Now you make a subclass of Foo called SubFoo, and you override bar(). What types can the new bar() take? What types can it return?

Look at return types first: we want to be able to substitute SubFoo where existing code expects Foo, so it needs to return things of type middle_type, or of some subtype (e.g. sub_type). This should be pretty obvious.

As for what types our new bar() can take: One answer is: bar() can take only things of type middle_type. You can't declare it to take sub_type, and you can't declare it to take super_type. This is called invariance.

Another answer: bar() can only be declared to take things that are a subtype of middle_type - so middle_type is OK, and sub_type is OK, but super_type is out. This is called covariance.

Finally, the third answer: bar() can only be declared to take things that are a supertype of middle_type - so any of middle_type, and super_type may be passed to bar(), but sub_type is not allowed. This is called contravariance.

Covariance seems to jibe with our notion that subclasses are more specialized, less general than their superclasses. So you might have a Collection class which takes and returns Objects; you could subclass it to make a FooCollection class which takes and returns Foos (where Foo is a subclass of Object).

Contravariance sounds kind of counterintuitive at first, but it's actually just analogous to the famous advice about implementing protocols: "Be liberal in what you accept, and conservative in what you send." So just as you have to return a subtype of the original bar()'s return type, you have to accept any supertype of whatever the original bar() accepts.

If your type system enforces contravariance of parameters, then it can tell at compile time whether your code is typesafe (cf. Sather, Objective Caml). If it enforces covariance (cf. Eiffel, C++), it can't really do that, but it can make some good guesses (cf. Eiffel) - though Eiffel programs are known to crash when it guesses wrong. " -- http://c2.com/cgi/wiki?ContraVsCoVariance
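
the same rules show up in Python's typing module, where function types are contravariant in parameters and covariant in results (a sketch):

    from typing import Callable

    class Animal: ...          # the super_type
    class Dog(Animal): ...     # the sub_type

    def handler(a: Animal) -> Dog: ...

    # where a Callable[[Dog], Animal] is expected, handler is acceptable:
    # it accepts a supertype of Dog (contravariant in the parameter) and
    # returns a subtype of Animal (covariant in the result)
    f: Callable[[Dog], Animal] = handler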


http://www.modernperlbooks.com/mt/2009/04/the-why-of-perl-roles.html http://www.modernperlbooks.com/mt/2009/05/perl-roles-versus-inheritance.html http://www.modernperlbooks.com/mt/2009/05/perl-roles-versus-duck-typing.html http://www.modernperlbooks.com/mt/2009/05/perl-roles-versus-interfaces-and-abcs.html http://www.modernperlbooks.com/mt/2009/05/more-roles-versus-duck-typing.html


" A trait is different from a mixin in that its individual methods can be ma- nipulated with trait operators such as sum (merge the methods of two traits), exclude (remove a method from a trait), and alias (add a copy of a method with a new name; do not redirect any calls to the old name). The practical difference between mixins and " -- http://www.cs.utah.edu/plt/publications/aplas06-fff.pdf Scheme with Classes, Mixins, and Traits

mb mixins dont have state either? "The classes BaseWidget, Widget and Label have state and they take the role of base classes, not mixins." -- http://www.artima.com/weblogs/viewpost.jsp?thread=246341

assertion that Component Architecture means using composition instead of inheritance "Actually, Zope 3 has been entirely rewritten with the goal of avoiding the mixin abuse of Zope 2 and to use composition instead of inheritance (this is basically what the buzzwords Component Architecture really mean)." -- http://www.artima.com/weblogs/viewpost.jsp?thread=246341

http://www.artima.com/weblogs/viewpost.jsp?thread=246341 asserts that mixins lead to classes with zillions of methods, leading to method name collisions:

" For instance, have a look at the hierarchy of the Plone Site class which I report in appendix. Between square backets you can see the number of methods/attributes defined per class, except special attributes. The plot comes from a real Plone application I have in production. The total count is of 38 classes, 88 names overridden, 42 special names and 648 regular names: a monster.

To trace the origin of the methods and to keep in mind the hierarchy is practically impossible. Moreover, both autocompletion and the builtin help facility become unusable, and the self-generated class documentation becomes unreadable since it is too big.

In other words, a design based on mixins works for small frameworks, but it does not scale at all to large frameworks. Actually, Zope 3 has been entirely rewritten with the goal of avoiding the mixin abuse of Zope 2 and to use composition instead of inheritance (this is basically what the buzzwords Component Architecture really mean).

My hate for mixins comes from my experience with Zope/Plone. However the same abuses could equally be done in other languages and object systems - with the notable exception of CLOS, where methods are defined outside classes and therefore the problem of class namespace pollution does not exist - in the presence of huge frameworks.

A consequence of namespace pollution is that it is very easy to have name clashes. Since there are hundreds of methods and it is impossible to know all of them, and since method overriding is silent, this is a real problem: the very first time I subclassed a Plone class I ran into this issue: I overrode a pre-defined method inadvertently, causing hard-to-investigate problems in an unrelated part of the code. "

suggests using generic fns instead, and modules for namespace control:

"I am a big fan of generic functions which are already used in the Python word - print is a generic function, the comparison operators are generic functions, numpy universal functions (ufunctions) are generic functions, etc - but should be used even more. With generic functions, mixins becomes useless. A side effect is that the class namespace becomes much slimmer: for instance, in CLOS classes are used just to contain state, whereas the methods live in a separate namespace. In most languages instead, classes are used as a namespace control mechanism, performing double duty - namespace control should be the job of modules." http://www.artima.com/weblogs/viewpost.jsp?thread=246341


python's super

" There is no superclass in a MI world

Readers familiar with single inheritance languages, such as Java or Smalltalk, will have a clear concept of superclass in mind. This concept, however, has no useful meaning in Python or in other multiple inheritance languages. I became convinced of this fact after a discussion with Bjorn Pettersen and Alex Martelli on comp.lang.python in May 2003 (at that time I was mistakenly thinking that one could define a superclass concept in Python). Consider this example from that discussion:

           +-----+
           |  T  |
           |a = 0|
           +-----+
         /         \
        /           \
    +-------+    +-------+
    |   A   |    |   B   |
    |       |    | a = 2 |
    +-------+    +-------+
        \           /
         \         /
           +-----+
           |  C  |
           +-----+
              :
              :    instantiation
              c
    >>> class T(object):
    ...     a = 0
    >>> class A(T):
    ...     pass
    >>> class B(T):
    ...     a = 2
    >>> class C(A,B):
    ...     pass
    >>> c = C()

What is the superclass of C? There are two direct superclasses (i.e. bases) of C: A and B. A comes before B, so one would naturally think that the superclass of C is A. However, A inherits its attribute a from T with value a=0: if super(C,c) was returning the superclass of C, then super(C,c).a would return 0. This is NOT what happens. Instead, super(C,c).a walks through the method resolution order of the class of c (i.e. C) and retrieves the attribute from the first class above C which defines it. In this example the MRO of C is [C, A, B, T, object], so B is the first class above C which defines a and super(C,c).a correctly returns the value 2, not 0:

    >>> super(C,c).a
    2

You may call A the superclass of C, but this is not a useful concept since the methods are resolved by looking at the classes in the MRO of C, and not by looking at the classes in the MRO of A (which in this case is [A,T, object] and does not contain B). The whole MRO is needed, not just the first superclass.

So, using the word superclass in the standard docs is misleading and should be avoided altogether.

Bound and unbound (super) methods

Having established that super cannot return the mythical superclass, we may ask ourselves what the hell it is returning ;) The truth is that super returns proxy objects. " --- http://www.artima.com/weblogs/viewpost.jsp?thread=236275
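
you can see the whole MRO directly, which makes the point concrete (continuing the T/A/B/C example above):

    >>> [k.__name__ for k in C.__mro__]
    ['C', 'A', 'B', 'T', 'object']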

involving the MRO (method resolution order) leads to all sorts of tricky results:

"

Remember to use super consistently

Some years ago James Knight wrote an essay titled Super considered harmful where he points out a few shortcomings of super and he makes an important recommendation: use super consistently, and document that you use it, as it is part of the external interface for your class, like it or not. The issue is that a developer inheriting from a hierarchy written by somebody else has to know if the hierarchy uses super internally or not. For instance, consider this case, where the library author has used super internally:

    # library_using_super

    class A(object):
        def __init__(self):
            print "A",
            super(A, self).__init__()

    class B(object):
        def __init__(self):
            print "B",
            super(B, self).__init__()

If the application programmer knows that the library uses super internally, she will use super and everything will work just fine; but if she does not know if the library uses super she may be tempted to call A.__init__ and B.__init__ directly, but this will end up with B.__init__ called twice!

    >>> from library_using_super import A, B
    >>> class C(A, B):
    ...     def __init__(self):
    ...         print "C",
    ...         A.__init__(self)
    ...         B.__init__(self)
    >>> c = C()
    C A B B

On the other hand, if the library does not uses super internally,

    # library_not_using_super

    class A(object):
        def __init__(self):
            print "A",

    class B(object):
        def __init__(self):
            print "B",

the application programmer cannot use super either, otherwise B.__init__ will not be called:

    >>> from library_not_using_super import A, B
    >>> class C(A,B):
    ...     def __init__(self):
    ...         print "C",
    ...         super(C, self).__init__()
    >>> c = C()
    C A

So, if you use classes coming from a library in a multiple inheritance situation, you must know if the classes were intended to be cooperative (using super) or not. Library authors should always document their usage of super.

Argument passing in cooperative methods can fool you

James Knight devotes a paragraph to the discussion of argument passing in cooperative methods. Basically, if you want to be safe, all your cooperative methods should have a compatible signature. There are various ways of getting a compatible signature, for instance you could accept everything (i.e. your cooperative methods could have signature (*args, **kw)) which is a bit too much for me, or all of your methods could have exactly the same arguments. The issue comes when you have default arguments, since your MRO can change if you change your hierarchy, and argument passing may break down. Here is an example:

"An example of argument passing in cooperative methods"

class A(object): def __init__(self): print 'A'

class B(A): def __init__(self, a=None): print 'B with a=%s' % a super(B, self).__init__(a)

class C(A): def __init__(self, a): print 'C with a=%s' % a super(C, self).__init__()

class D(B, C): def __init__(self): print 'D' super(D, self).__init__()

>>> from cooperation_ex import D >>> d = D() D B with a=None C with a=None A

This works, but it is fragile (do you see what will happen if you change D(B, C) to D(C, B)?) and in general it is always difficult to figure out which arguments will be passed to each method and in which order so it is best just to use the same arguments everywhere (or not to use cooperative methods altogether, if you have no need for cooperation). There is no shortage of examples of trickiness in multiple inheritance hierarchies; for instance I remember a post from comp.lang.python about the fragility of super when changing the base class.

Also, beware of situations in which you have some old style classes mixing with new style classes: the result may depend on the order of the base classes (see examples 2-2b and 2-3b in Super considered harmful). " -- http://www.artima.com/weblogs/viewpost.jsp?thread=237121

Simionato concludes that the problem is with multiple inheritance itself; he prefers generic multimethods, traits, or mixins.

he defines mixins and traits as so:

" I personally liked super, cooperative methods and multiple inheritance for a couple of years, then I started working with Zope and my mind changed completely. Zope 2 did not use super at all but is a mess anyway, so the problem is multiple inheritance itself. Inheritance makes your code heavily coupled and difficult to follow (spaghetti inheritance). I have not found a real life problem yet that I could not solve with single inheritance + composition/delegation in a better and more maintainable way than using multiple inheritance. Nowadays I am very careful when using multiple inheritance.

People should be educated about the issues; moreover people should be aware that there are alternatives to multiple inheritance in other languages. For instance Ruby uses mixins (they are a restricted multiple inheritance without cooperative methods and with a well defined superclass, but they do not solve the issue of name conflicts and the issue with the ordering of the mixin classes); recently some people proposed the concepts of traits (restricted mixin where name conflicts must be solved explicitly and the ordering of the mixins does not matter) which is interesting.

In CLOS multiple inheritance works better since (multi-)methods are defined outside classes and call-next-method is well integrated in the language; it is simpler to track down the ancestors of a single method than to wonder about the full class hierarchy. The language SML (which nobody except academics use, but would deserve better recognition) goes boldly in the direction of favoring composition over inheritance and uses functors to this aim.

" http://www.artima.com/weblogs/viewpost.jsp?thread=237121

here's a post in which he argues for plain multiple-dispatch functions over mixins: http://www.artima.com/weblogs/viewpost.jsp?thread=237764

altho i note: then you have to add multimethod cases to the functions when you define a new class that would otherwise have used a different mixin. if you don't own the code for the multimethod fn, you're in trouble (unless it can be overridden remotely for a given type)


http://www.muthukadan.net/docs/zca.html


according to http://www.holub.com/goodies/uml/:

" Aggregation (comprises) relationship relationship.1 Destroying the "whole" does not destroy the parts.

Composition (has) relationship.1 The parts are destroyed along with the "whole." "

and

" (1) Composition vs. Aggregation: Neither "aggregation" nor "composition" really have direct analogs in many languages (Java, for example).

An "aggregate" represents a whole that comprises various parts; so, a Committee is an aggregate of its Members. A Meeting is an aggregate of an Agenda, a Room, and the Attendees. At implementation time, this relationship is not containment. (A meeting does not contain a room.) Similaraly, the parts of the aggregate might be doing other things elsewhere in the program, so they might be refereced by several objects. In other words, There's no implementation-level difference between aggregation and a simple "uses" relationship (an "association" line with no diamonds on it at all). In both cases an object has references to other objects. Though there's no implementation difference, it's definitely worth capturing the relationship in the UML, both because it helps you understand the domain model better, and because there are subtle implementation issues. I might allow tighter coupling relationships in an aggregation than I would with a simple "uses," for example.

Composition involves even tighter coupling than aggregation, and definitely involves containment. The basic requirement is that, if a class of objects (call it a "container") is composed of other objects (call them the "elements"), then the elements will come into existence and also be destroyed as a side effect of creating or destroying the container. It would be rare for an element not to be declared as private. An example might be a Customer's name and address. A Customer without a name or address is a worthless thing. By the same token, when the Customer is destroyed, there's no point in keeping the name and address around. (Compare this situation with aggregation, where destroying the Committee should not cause the members to be destroyed---they may be members of other Committees).

In terms of implementation, the elements in a composition relationship are typically created by the constructor or an initializer in a field declaration, but Java doesn't have a destructor, so there's no way to guarantee that the elements are destroyed along with the container. In C++, the element would be an object (not a reference or pointer) that's declared as a field in another object, so creation and destruction of the element would be automatic. Java has no such mechanism. It's nonetheless important to specify a containment relationship in the UML, because this relationship tells the implementation/testing folks that your intent is for the element to become garbage collectable (i.e. there should be no references to it) when the container is destroyed. "
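
a quick Python shorthand for the two relationships (illustration only; as the quote says, Python can't actually guarantee the element dies with the container):

    class Engine: ...

    class Car:                        # composition: the element is created by
        def __init__(self):           # the constructor and meant to be private;
            self.engine = Engine()    # no Engine outlives its Car (by intent)

    class Committee:                  # aggregation: the members exist before and
        def __init__(self, members):  # after, and may sit on other Committees
            self.members = members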


"synchronization": "As we shall see, the purpose of a synchronization algorithm is to en- force precedence relations among operation executions." -- Arbiter-Free Synchronization Distributed Computing 16, 2/3, (2003) 219-237.


http://scienceblogs.com/goodmath/2006/10/haskell_and_scheme_which_one_a.php ---

in go, a method can satisfy an interface regardless of whether it modifies the input args?:

www.stanford.edu/class/ee380/Abstracts/100428-pike-stanford.pdf "

    type Stringer interface {
        String() string
    }

    func print(args ...Stringer) {
        for i, s := range args {
            if i > 0 {
                os.Stdout.WriteString(" ")
            }
            os.Stdout.WriteString(s.String())
        }
    }

    print(Day(1), Fahrenheit(72.29))
    => Monday 72.3°F

Again, these methods do not take a pointer, although another type might define a String() method that does, and it too would satisfy Stringer. "

---

http://en.wikipedia.org/wiki/Cyclone_programming_language

---

" Not sure if this is the type of criticism you are looking for (these are mostly engineering type problems) but the reasons I still program in non-functional languages are:

Lack of functional polymorphism leads to namespace crowding, i.e. no operator overloading. Type classes help address this somewhat, but it is still a pain point.

Lack of sugar. Despite having clearly inferior support for map/filter/reduce, most scripting languages provide cleaner string processing functionality because of OO-enabled operator overloading and plenty of sugar.

State. Again is an obvious one, but the lack of state is also a plus for pure functional languages. It is a trade-off, and is one of the main features that makes pure functional languages unique. For me this is clearly a feature and not a bug. I do think though, that many of the common problems caused by the lack of state could be addressed with a good helping of sugar on the part of the compiler. " -- http://lambda-the-ultimate.org/node/3924#comment-58889

http://www.haskell.org/haskellwiki/DDC

toread: http://lambda-the-ultimate.org/node/2700 http://conal.net/papers/icfp97/ http://lambda-the-ultimate.org/node/3924#comment-59037 http://lambda-the-ultimate.org/node/3924

---

Add me to the long list of functional programming loyalists who really hopes that the monad complexity issue is resolved or mitigated. Having five million slight variations of the state monad is a black mark on Haskell and the antithesis of clean, generic, extensible design.

Still love it anyways.

Posted by: Matt Skalecki

-- http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2063737 ---

In the .NET world, "command/query separation" seems to be among the major buzzwords. Every method should either return information about the current state (query) or change that state (command); queries should never change the objects they're querying.

-- http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2063771

--- Which is a great principle to apply to one's programming, but it continues to piss me off that slavish adherence to c/q separation makes it impossible to write a proper "pop" method for a stack. --- http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2063832
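
the complaint, made concrete in Python: pop() is both a query (returns the top) and a command (mutates the stack), so strict c/q separation forces two calls:

    class Stack:
        def __init__(self): self._items = []
        def push(self, x): self._items.append(x)    # command
        def top(self): return self._items[-1]       # query: no state change
        def drop(self): del self._items[-1]         # command: no return value

    s = Stack()
    s.push(1)
    x = s.top()   # query first...
    s.drop()      # ...then command; another client could sneak in between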

--- should try to understand this someday: http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2063906

---

" @9:

    If a function mutates local variables, but does not mutate global state, for all practical purposes it is just as good as a stateless function. From the callers perspective it is stateless.

Indeed, and Haskell's type system is rich enough to express this constraint. See http://www.haskell.org/haskellwiki/Monad/ST#A_few_simple_examples

Posted by: Dan


" -- http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2064185

me: yes, but... the other guy's point is that having to use the ST monad for that isn't quite as convenient.

toread: http://cs.anu.edu.au/~Ben.Lippmeier/project/thesis/thesis-lippmeier-sub.pdf

---

look AGAIN at closures, coroutines

---

There was more to T than implementation technology; there was also a lot of beautiful language design happening. Jonathan seized the opportunity to make a complete break with backwards compatibility in terms of the runtime library and even the names chosen. Somewhere in the T 2 effort, Kent Pitman, another Lisp wizard, came down to Yale from MIT. He and Jonathan poured an immense amount of design effort into the language, and it was just really, really *clean*. Small (but difficult) things: they chose a standard set of lexemes and a regular way of assembling them into the names of the standard procedures, so that you could easily remember or reconstruct names when you were coding. (I have followed this example in the development of the SRFIs I've done for the Scheme community. It is not an easy task.)

Larger, deeper things: they designed a beautiful object system that was integrated into the assignment machinery -- just as Common Lisp's SETF lets you assign using accessors, e.g., in Common Lisp (setf (car x) y) is equivalent to (rplaca x y); in T, (set! (car x) y) was shorthand for ((setter car) x y). Accessor functions like CAR handled "generic functions" or "messages" like SETTER -- CAR returned the SET-CAR! procedure when sent the SETTER message. The compiler was capable of optimising this into the single Vax store instruction that implements the SET-CAR! operation, but the semantic machinery was completely general -- you could define your own accessor procedures, give them SETTER methods, and then use them in SET! forms.

(This turned out to be very nice in the actual implementation of the compiler. The AST was a tree of objects, connected together in both directions -- parents knew their children; children also had links to their parents. If the optimiser changed the else-part of an if-node N with something like this (set! (if-node:else n) new-child) which was really ((setter if-node:else) n new-child) the if-node:else's SETTER method did a lot of work for you -- it disconnected the old child, installed NEW-CHILD as N's else field, and set NEW-CHILD's parent field to be N. So you could never forget to keep all the links consistent; it was all handled for you just by the single SET! assignment.)

-- http://www.paulgraham.com/thist.html

--- on the need for optional single-use continuations even when multiple-use ones are offered: http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations-original.html

good defn of "dynamic contours"?

--- http://axisofeval.blogspot.com/2010/04/dylan-and-lisp-family-trees-central.html

" Monday, April 26, 2010 Dylan and the Lisp family tree's central branch Since the early eighties (beginning with Scheme and T), most Lisps began to settle around a common core.

(This also coincides with the point in time when static scoping was finally understood, once and for all, after a painful and embarrassing history.)

With the exception of Scheme, most Lisps don't have multi-shot continuations. They seriously complexify a language, as can be seen in weird implementation techniques like Cheney on the MTA.

It's also hard (or even impossible?) to make a good UNWIND-PROTECT in the face of multi-shot continuations. And UNWIND-PROTECT is surely one of the most important control flow operators.

So what is the common core I'm talking about? You can see it best in Dylan, I think.

First of all, a Lisp that follows this common core design can be efficiently implemented on real hardware. No need to do weird stuff like stack copying or Cheney on the MTA. In fact, Dylan has, with a bit of squinting, the same dynamic (control flow) semantics as C.

Second, the common core is simply nice to program in.

Some features of the common core:

This core can be seen in languages that I consider the central branch of the Lisp family tree:

All of them are great languages, and worth detailed study.

In the future I hope to write about each of the features these languages share in more detail. " -- Manuel J. Simoni

---

"You know it is right when both simplicity and power are maximized, while at the same time confusion and the need for kludges are minimized.

PLOT emphasizes cleanliness, flexibility, and extensibility." -- http://users.rcn.com/david-moon/PLOT/page-1.html


more on PLOT:

http://users.rcn.com/david-moon/PLOT/Moon-ILC09.pdf

http://news.ycombinator.com/item?id=537652

--

gruseom on HN:

Wow, this is interesting on many levels. The slides from ILC are a good overview: http://users.rcn.com/david-moon/PLOT/Moon-ILC09.pdf. They contain this fascinating statement:

Traditionally, code walking has required ad hoc code to understand every “special form.” It is better to have a well-defined, object-oriented interface to the Abstract Syntax Tree, scopes, and definitions. This is why objects are better than S-expressions as a representation for program source code.

I have never heard anyone point to s-expressions as the reason code walkers are hard to write in Lisp (and they are, at least in CL).

---

http://users.rcn.com/david-moon/PLOT/page-2.html

General Principles

" Everything about the language is to be defined in the language itself. The language is to be fully extensible by users, with no magic. Of course to be implementable this requires bootstrapping.

Anything that must be intrinsically known to the compiler is explicitly marked.

Define as much as possible by the binding of names to values.

Use naming conventions in place of Common Lisp's multiple name-binding spaces for functions, variables, types, and packages.

Discourage assignment, and make it syntactically obvious where it is allowed or occurs. But do not forbid it.

Discourage unnecessary type declarations, but allow them where needed.

Use hygienic macros pervasively.

Language-enforced access control is incompatible with macros and with debugging within the language, so do not attempt to provide that type of feature. Use naming conventions and social enforcement instead.

Arithmetic operations must always produce mathematically correct results. No insanity like (x + y) / 2 is sometimes a negative number when both x and y are positive.

Strive for programs to be readable yet concise. Hence use infix syntax, case-insensitive names, and nesting structure indicated by indentation rather than punctuation.

Minimize the use of punctuation and maximize the use of whitespace, for readability.

Avoid abbreviation, but when you must abbreviate, do it consistently.

Strive for simple concepts that combine in powerful ways. Keep removing unnecessary complex features until no more can be removed.

Take full advantage of classes and methods.

Do not conflate the concepts of class, module, scope, and encapsulation. Use simple concepts that combine in powerful ways instead of one overly powerful concept that tries to do everything. "

ELL kernel: http://github.com/manuel/ell/blob/master/KERNEL.org

--

" Here's what Ell is made of:

 ---

http://research.sun.com/projects/plrg/Publications/ICFPAugust2009Steele.pdf

more steele pubs at http://labs.oracle.com/projects/plrg/Publications/


" IMO, Python is farther away from the Lisp genotype than Java. At least Java has post-1980's scoping rules. " -- http://axisofeval.blogspot.com/2010/05/next-lisps.html

"2 comments:

fogus said...

    I'm enjoying your blog so far, even though I get the impression that my leg is being pulled at times. ;-) It's definitely hard to dispute that the specific Clojure code linked to is ugly, but I will say that's it's not representative of the entire look and feel. The annotation support, for better or worse, is a necessity given that Clojure strives to interoperate with Java. Many of the interop forms are less than ideal simply because they require a lot of Java's semantics to taint the pool -- so to speak. Thankfully, the division between interop forms and pure Clojure is very clear and generally allow the ugly bits to be hidden away or outright avoided.
    :f
" http://axisofeval.blogspot.com/2010/05/next-lisps.html commenting on http://gist.github.com/377213

" Function passing

Ruby and Python let us pass functions as first-class objects. In Python, I pass functions around, uncalled, without much ado. It’s beautiful. I feed one into another as a normal variable. No special syntax required. I call a function by appending parentheses. Just like in math. It’s really a breath of fresh air coming from Ruby:

    f = lambda x, y: x + y ** 3 - y
    f
    # => <function <lambda> at 0x100481050>

    f(2, 3)
    # => 26

    reduce(f, [2, 3, 4], 0)
    # => 90

In Ruby, things are more complicated. I need an ampersand to pass a function and brackets to call it:

    f = lambda { |x, y| x + y ** 3 - y }
    f
    # => #<Proc:0x000001012a8fc8@(irb):32 (lambda)>

    f.call(2, 3)
    # => 26

    # Brackets are syntactic sugar for #call:
    f[2, 3]
    # => 26

    [2, 3, 4].reduce(0, &f)
    # => 90

The brackets, at least, have never felt natural to me.

More importantly (and annoyingly), if I define a top-level method with def—let’s call those top-level def methods (TLDMs)—Ruby won’t let me pass it as a block to any other method. (TLDMs actually belong to an object, so strictly speaking, this makes sense. It’s still annoying.) In Python, we can pass lambdas and TLDMs like they’re identical.

So Ruby makes function-passing doable. Python makes it absolutely painless.

Functions, methods, objects

Out of the box, Python gives us great tools for functional programming goodness:

    len(coll)
    map(f, coll)
    reduce(f, coll, i)
    filter(f, coll)
    str(obj)

Not all the Lisp functions are functions in Python, though—and this was a shocker for me. Some are instance methods. (Examples: capitalize(), reverse().) Actually, it’s hard to guess when Python will go one way or the other. It seems to follow convenience and tradition more than any kind of rationale, and that makes coding a little confusing. Sometimes it’s even hard to guess what the right receiver is. To join a list, for example, you have to pass the list to a string! (' '.join(['my', 'list', 'here']).) In whose head did that make sense?

This is a problem because Python treats methods and functions differently. You can pass functions to other functions. But you can’t pass instance methods. (Update. Apparently you can, but it is not intuitive.) I can stringify a list by saying map(str, numbers), because str() happens to be a function that I can map with. But I can’t capitalize a list in that way, because capitalize() is a method. In this last case, I’d have to use a list comprehension to capitalize the list, just because of how the library is designed.

In Ruby, equally a Lisp, every function has a receiver and is really no function at all. (Lambdas come close.) It’s actually a method. There is never a question about this for Ruby data structures. “Is it capitalize string or string.capitalize?” is a question that just doesn’t come up. To a lot of Rubyists, it seems frightening that you would ever call len on an object “from the outside”. The Ruby equivalents of the above are:

    coll.length
    coll.map &f
    coll.reduce i, &f
    coll.select &f
    obj.to_s

Ruby lets us use methods on other methods and on lambdas, so I can do the following without a problem. (Note: methods need a colon after the ampersand because you’re naming a method to call, not passing an actual method object.)

str = lambda { |x| x.to_s }

numbers = [1, 2, 3]
strings = ['a', 'b', 'c']

numbers.map &str

  1. => ['1', '2', '3']

strings.map &:capitalize

  1. => ['A', 'B', 'C']

Blocks

In Ruby, blocks unify anonymous functions, closures, and iteration. On top of that, they make chaining wicked simple. Python gives us the tools to do all of the same things as Ruby. But it is not unified. I’ll have a meaty example for you in the next article, proving every word I’ve just said.

Explicitness

A poem:

ruby is zen
python is explicit
python is explicit about its explicitness
return python

Miscellaneous

The differences I just listed are most important to me. Other people might include a couple of other biggies:

    Ruby lacks list comprehensions
    Python doesn’t give us literals for regular expressions
    Python distinguishes attributes and methods; Ruby doesn’t" -- http://wit.io/posts/ruby-and-python-pivot-points

ntoshev 749 days ago

Personally I like generators, generator expressions and list comprehensions much more than the Ruby equivalents (chaining each/filter/etc). Python is cleaner overall, and if you want metaprogramming you can still do a lot of it. Also Python has better libraries and runs on App Engine.

-- http://news.ycombinator.com/item?id=682171

I wish Ruby were good, but it's so fucked:

  Matz's decision-making process
    He tries to make Ruby be all things to all people
      Lots of confusing sugar and overloading baked in
    I much prefer Guido's hard pragmatism
  The pointless panoply of function types: Methods, Blocks, Procs, Lambdas
    All intertwined and yielding into one another.
    I love that in Python there is only one:
      Objects with a __call__ method, defined metacircularly.
  The culture of adding/overloading methods on base classes
    Many gems do this en masse, and there are a lot of low-quality gems
      Seriously, Ruby might have a ton of new gems for everything, but they
      are almost universally awful. A culture of sharing any code that *could*
      be a module, no matter how trivial, leads to immature crap being widely
      used because it was there already, with a mess of forks to clean up
      afterwards. At least most of them are test-infected...
        Python does come with a few stinkers, mostly ancient syscall wrappers.
    Especially disastrous because it's unscoped, and infects the whole process
      For a language with four scoping sigils it sure fucks up scope a lot
    The syntax practically begs you to do it, anything else would look shitty
  The Matz Ruby Implementation
    The opposite of turtles-all-the-way-down (Smalltalk crushed beneath Perl)
    It actively punishes you for taking advantage of Ruby's strengths
    The standard library is written almost entirely in C
      It doesn't use Ruby message dispatch to call other C code.
      That means that if you overload a built-in, other built-ins won't use it
    Anything fiddly that's not written in C will be dog slow

--- http://news.ycombinator.com/item?id=682305

http://www.zedshaw.com/blog/2009-05-29.html talks about inconsistent library conventions between functions and their inverses (and near-inverses) in Python, e.g. mystuff.remove(mything) vs. mystuff.append(mything) vs. del mystuff[4]

---

in ruby, method chaining looks like this:

poem.lines.to_a.reverse

(you can do this in python, with added ()s, but only when all the stuff you are trying to do are methods and not just functions)
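for the record, a sketch of that chain in Python, with splitlines/reversed standing in for lines/to_a/reverse; the plain function reversed is exactly what breaks the chain and forces nesting:

    poem = "one\ntwo\nthree\n"
    # methods chain left-to-right, but the function call has to wrap:
    print(list(reversed(poem.splitlines())))  # => ['three', 'two', 'one']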


in ruby, convention is that square brackets are "subset of" operator, e.g. Dir["/*.txt"]

in ruby, {} is the constructor for both blocks and hashes

---

module MyModule
  class MyClass
    def my_method
      10.times do
        if rand < 0.5
          p :small
ennnnnd

-- idea from: http://redmine.ruby-lang.org/issues/5054

" I know matz's already rejected a python-style block. He wrote:

> it works badly with
> * tab/space mixture
> * templates, e.g. eRuby
> * expression with code chunk, e.g. lambdas and blocks

http://www.ruby-forum.com/topic/108457

"

http://blog.peepcode.com/tutorials/2010/what-pythonistas-think-of-ruby#fn3


" A Simple Scenario

Let's say you're running a hot dating site, and a certain user, "Angel", just looked at another user, "Buffy." And here's one tiny piece of your program: When Angel views Buffy

    Figure out their match score
    Request a new, next match for Angel to look at.
    Record that Angel stalked Buffy, and get back the last time it happened.
    Send Buffy an email that Angel just looked at her, but only if:
        they're a good match, and
        they haven't looked at each other recently.

This isn't very complicated logic. In our pre-async minds, our code looks something like this:

handleVisit : function(angel, buffy) {
  var match_score = getScore(angel, buffy);
  var next_match = getNextMatch(angel);
  var visit_info = recordVisitAndGetInfo(angel, buffy);
  if (match_score > 0.9 && ! visit_info.last_visit) {
    sendVisitorEmail(angel, buffy);
  }
  doSomeFinalThings(match_score, next_match, visit_info);
}

But of course these are all blocking calls requiring callback events. So our code ends up like this:

handleVisit : function(angel, buffy) {
  getScore(angel, buffy, function(match_score) {
    getNextMatch(angel, function(next_match) {
      recordVisitAndGetInfo(angel, buffy, function(visit_info) {
        if (match_score > 0.9 && ! visit_info.last_visit) {
          sendVisitorEmail(angel, buffy);
        }
        doSomeFinalThings(match_score, next_match, visit_info);
      });
    });
  });
}

There are other ways we could have written it, defining named callback functions, for example. Either way, it's pretty easy to write. But for an outside reader - or you returning to your own code later - it's difficult to follow and far worse to edit or rearrange. And it's just a simple example. In practice, a full async stack means one path through your code has dozens of calls and callbacks littered across all kinds of unnatural functions you were forced to create. Inserting new calls and rearranging are cumbersome.

We learned this about 6 months in at OkCupid?. Our web services started out simple and elegant, like the example above, but the more developers added to them, the more absurd our code got. There were some dark days at OkCupid?. Once we started integrating more async code: a distributed cache, a pub/sub system, etc., our code got heinous.

(A note for more experienced devs: control-flow libraries helped us fire our code in parallel, but they wouldn't let us throw async calls into the middle of an existing function without hacking that function in half. Later in this page you'll see an example that's horrible with such libraries.)

But back to our example. Worse than ugliness, we've made a programming mistake. All of those calls are made in serial. getNextMatch, getScore, and recordVisit are all contacting different servers, so they should be fired in parallel. So...how does Tame solve this?

var res1, res2;
await {
  doOneThing(defer(res1));
  andAnother(defer(res2));
}
thenDoSomethingWith(res1, res2);

"

-- http://tamejs.org/
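for comparison, the same control flow sketched with Python's asyncio (the stub coroutines stand in for the example's calls; asyncio.gather fires them in parallel and then execution continues straight-line, which is what Tame's await/defer is after):

    import asyncio

    # stubs standing in for the example's remote calls
    async def get_score(a, b): return 0.95
    async def get_next_match(a): return 'cordelia'
    async def record_visit_and_get_info(a, b): return {'last_visit': None}

    async def handle_visit(angel, buffy):
        # the three independent calls run concurrently, then we continue in a straight line
        match_score, next_match, visit_info = await asyncio.gather(
            get_score(angel, buffy),
            get_next_match(angel),
            record_visit_and_get_info(angel, buffy))
        if match_score > 0.9 and not visit_info['last_visit']:
            print('sending email to', buffy)
        print('done:', match_score, next_match)

    asyncio.run(handle_visit('angel', 'buffy'))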


"

Defining New Types

The real innovation in Qi is the use of sequent notation to define types. This is an enormously powerful means of defining types which gives Qi capacities beyond ML or even more recent languages such as Haskell. The use of sequent notation derives from Gerhard Gentzen who developed the sequent calculus treatment of first-order logic. In Gerhard's system a sequent is a pair composed of two lists of logical formulae called the antecedent and the succeedent respectively. The sequent is valid if the conjunction of the elements of the antecedent logically implies the disjunction of the elements of the consequent. The concept of logical implication is defined by laying down a series of sequent rules that say how sequents may be proved.

If the succeedent is restricted by only allowing one element in the list, then the system is a single-conclusion sequent calculus system; which is the kind that Qi is designed to represent. Qi requires that user-defined types be defined in terms of sequent rules. In other words, to define a type in Qi is to say how objects can be proved to belong to that type.

For instance, suppose we wish to define a type binary composed of all non-empty lists of 0s and 1s. Using sequent notation we would write:

if X e {0,1}
X : zero-or-one;

This says that for any X, any sequent whose conclusion is X : zero-or-one (meaning X is of the type zero-or-one) is provable if the side-condition is satisfied (namely X is an element of the set {0, 1}). In Qi we write:

if (element? X [0 1])
X : zero-or-one;

We state the simplest condition for being a binary number. Next we have two rules that say how bigger binary numbers are built out of smaller ones.

X : zero-or-one;
________________
[X] : binary;

\ binary_right \
X : zero-or-one; Y : binary;
____________________________
[X | Y] : binary;

\ binary_left \
X : zero-or-one, [Y | Z] : binary >> P;
_______________________________________
[X Y | Z] : binary >> P;

This says that if we have a list of at least two elements X and Y (followed by 0 or more elements Z), which is assumed to be binary, then we can replace this assumption by the assumption that [Y | Z] is binary and X : zero-or-one. We need both rules because the first rule says how to prove conclusions of the form .... : binary and the second tells us what we can infer from assumptions of the form .... : binary.

Here is a script showing the definition of the datatype binary as a non-empty list of 1s and 0s. The function complement calculates the complement of a binary number. Notice there are no special constructor functions for binary numbers - we just use lists of zeros and ones in a natural way. But this means that there is no obvious way of telling if [1 1 1 0] is intended to be a list of binary numbers or a list of numbers. To avoid headaches and combinatorial problems in typechecking, Qi insists on the explicit typing of functions.

" -- http://www.lambdassociates.org/qilisp.htm

---

http://langnostic.blogspot.com/2010/09/yegge-strikes-back-from-grave.html


java's FFI is called JNI (java native interface): http://en.wikipedia.org/wiki/Java_Native_Interface


this blog post claims that haskell obsoletes lisp macros, b/c you only need macros to avoid evaluating arguments, which haskell does by default (laziness):

" Macros

Another beauty of Lisp is its macro facility. I’ve not seen its like in any other language. Because the forms of code and data are equivalent, Lisps macro are not just text substitution, they allow you to modify code structure at compile-time. It’s like having a compiler construction kit as part of the core language, using types and routines identical to what you use in the runtime environment. Compare this to a language like C++, where, despite the power of its template meta-language, it employs such a radically different set of tools from the core language that even seasoned C++ programmers often have little hope of understanding it.

But why is all this necessary? Why do I need to be able to perform compile-time substitutions with a macro, when I can do the same things at runtime with a function? It comes down to evaluation: Before a function is called in Lisp, each of its arguments must be evaluated to yield a concrete value. In fact, it requires that they be evaluated in order1 before the function is ever called.

Say I wanted to write a function called doif, which evaluates its second argument only if the first argument evaluates to true. In Lisp this requires a macro, because an ordinary function call would evaluate that argument in either case:

(defun doif (x y) (if x y))        ; WRONG: both x and y have been evaluated already
(defmacro doif (x y) `(if ,x ,y))  ; Right: y is only evaluated if x is true

What about Haskell? Does it have a super-cool macro system too? It turns out it doesn’t need to. In fact, much of the coolness of Haskell is that you get so many things for free, as a result of its design. The lack of needing macros is one of those:

doif x y = if x then (Just y) else Nothing

Because Haskell never evaluates anything unless you use it, there’s no need to distinguish between macros and functions. " -- http://newartisans.com/2009/03/hello-haskell-goodbye-lisp/

is it true that that's all that macros are for? it squares with this:

" 3. Purpose: To control evaluation of the arguments.

Since macros are so much harder to use than functions, a good rule of thumb is: don't use defmacro if defun will work fine. So, for example, there would be no reason to try to use a macro for Square: a function would be much easier to write and test. In Lisp, unlike in C, there is no need to use macros to avoid the very small runtime overhead of a function call: there is a separate method for that (the "inline" proclamation) that lets you do this without switching to a different syntax. What macros can do that functions cannot is to control when the arguments get evaluated. Functions evaluate all of their arguments before entering the body of the function. Macros don't evaluate any of their arguments at preprocessor time unless you tell it to, so it can expand into code that might not evaluate all of the arguments. For example, suppose that cond was in the language, but if wasn't, and you wanted to write a version of if using cond. " -- http://www.apl.jhu.edu/~hall/Lisp-Notes/Macros.html
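in a strict language without macros, the standard workaround is manual thunking; a minimal Python sketch of the doif idea (names borrowed from the quotes above):

    # doif evaluates y only when x is true; the caller must delay y in a lambda
    def doif(x, y_thunk):
        if x:
            return y_thunk()

    print(doif(True, lambda: 1 / 1))   # => 1.0
    print(doif(False, lambda: 1 / 0))  # => None, and 1/0 is never evaluated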

from the comments of http://newartisans.com/2009/03/hello-haskell-goodbye-lisp/ :

harsha says: March 14, 2009 at 2:18 pm

Well, i like lisp(scheme) & haskell too. But note that the need for macros is only eliminated in some cases. In particular, typeclasses like monads & arrows have special notation which helps a lot in using them. If i am not wrong, i think there is no way for you to directly define something for your own custom typeclass, what the do notation does for monads. So you still need macros, either via something like Template Haskell or Liskell.

Sam says: March 14, 2009 at 5:13 pm

I don’t think you really did CL-style macros justice. They can be used for a lot more than just changing the order that arguments are evaluated in — you can create whole new syntactic constructs at will.

For one thing, this means that CL doesn’t tend to ‘lag behind’ in terms of language design, since if another language ever introduces something innovative then you can easily ‘extend lisp’ with macros to add that functionality. There is no need to wait for an updated compiler.

The other thing is that it allows you to build languages tailored to solving the particular problem at hand. DSLs are cool :-)

Having said that, I have issues with lisps that macros just don’t make up for, and love Haskell more in any case :-p

John Wiegley says: March 14, 2009 at 6:13 pm

You’re indeed right, I couldn’t do CL justice in this regard. When I referred to being like a “compiler construction set”, I meant to imply a whole world of goodness. Being able to utilize the entire Lisp runtime at compile-time is something that just can’t be expressed in a few words like this.

Peter Seibel says: March 14, 2009 at 6:14 pm

I think you do a bit of a disservice to Lisp’s macros: the more interesting macros are not ones that simply delay evaluation of certain forms. More interesting is when a macro transforms something that doesn’t have any meaning into something that does. I give some examples of such macros in Practical Common Lisp (http://www.gigamonkeys.com/book/), in particular Chapter 24 on parsing binary files. Which is not to say that Haskell isn’t cool too. ;-)

John Wiegley says: March 14, 2009 at 6:40 pm

You’re so right about that, Peter. Lisp’s macros can be used to transform arbitrary syntax at compile-time into something legal, which allows for extreme freedoms of expression. You can even implement whole DSLs by macro alone — which is just what `LOOP` does, for instance.

So I take back my assertion that its essential purpose is to control evaluation; it’s truly a thing of beauty that other languages should take note of.

Sam says: March 14, 2009 at 7:22 pm

I would think that many other languages *have* taken note — the issue is that macros only really work in lisp because of the list based syntax. You can certainly do them in a languages with more ‘normal’ syntax (see Dylan and Nemerle, for example) but they’re far less pleasant to use.

There really isn’t a lot you can do about it, either, since it’s the trivial syntax of CL that makes CL macros so easy to use. I think we’ll eventually see reasonable macro systems for complex syntaxes, but AFAIK they haven’t arrived yet.

So, someone might say, why have complex grammars at all? They obviously aren’t *necessary*, since simple ones like those found in lisps are obviously usable, but by providing special syntax for common operations you can make the language more succinct and expressive. One of CL’s failings, in my opinion, is that although the syntax can more or less be adapted to work with anything, it’s still general and never gives you the optimal solution for anything. More specific syntaxes are less flexible, but usually far more expressive and succinct in their particular problem domain.

One day I hope to see a language which allows for specialised syntax, but still translates in into a clean AST which can be manipulated by macros at eval time. Maybe I should make a demo language… :-p

Daniel Weinreb says: March 15, 2009 at 8:21 pm

I’ve been using Lisp for 33 years, since I wrote system software for the Lisp Machine at MIT, and later as a co-founder of Symbolics. I’m using Common Lisp again now, as part of a big team writing a high-performance, highly-available, commercial airline reservation system, at ITA Software. Recently, I started learning Haskell. It’s fascinating and extremely impressive. It’s so different from the Lisp family that it’s extremely hard to see how they could converge. However, you can make a Lisp that is mostly-functional and gets many of the parallelism advantages you discuss. We now have one that I think is extremely promising, namely Rich Hickey’s Clojure.

If you want to program in Common Lisp, read Practical Common Lisp by Peter Seibel, without question the best book on learning Common Lisp ever written. For Haskell, I’ve been reading Real World Haskell by Bryan O'Sullivan? et al. It’s excellent and I highly recommend it.

All of the comments that I was going to make have been made very well already, particularly about the power of Lisp macros for language extension and making domain-specific languages.

Sam, above, wonders whether we’ll see reasonable macro systems for complex syntax. I presume he means macro systems that can match the power of Lisp’s macros. There is some progress being made in this area. At the International Lisp Conference next week, there will be an invited talk called “Genuine, full-power, hygienic macro system for a language with syntax”. This is by David Moon, my long-time colleague, who among many other things was one of the designers of Dylan. He has been inventing a new programming language, roughly along the lines of Dylan in some ways, and he’ll be talking about it for the first time. I’m pretty sure he does not claim to have brought the full power of Lisp macros to an infix-syntax language, but I think we’ll find out that it’s another important step in that direction.

By the way, the conference also features a tutorial called “Clojure in Depth”, by Rich Hickey himself, running five hours (in three parts), “The Great Macro Debate” about the virtues and vices of Lisp macros, and all kinds of other great stuff. We’ve closed online registration but you can still register at the door. It’s at MIT (Cambridge MA). See ilc09.org.

Clojure’s being written in terms of the JVM has an extremely important advantage: it lets the Lisp programmer access a huge range of libraries. Although there are a lot more great Common Lisp libraries than most people know about (we’ll be addressing this!), there’s no way Common Lisp can ever keep up with all the specialized libraries being developed for the JVM.

There are also two huge implementation advantages: Clojure’s implementation can ride on the excellent JIT compilers and the excellent garbage collectors of the various JVM implementations (have you tried out JRockit?) rather than having to do this work all over again.

Because your post showed so much depth of understanding, I was very interested to hear how you felt about Clojure. I don’t understand it, though.

It’s always been unclear to me precisely what people mean by “scripts” and “scripting languages”. The terms are used widely, but with very different meanings. For example, to some people, it seems that a “scripting language” is one with dynamic typing!

As far as I’m concerned, nobody has a broader and deeper knowledge of computer languages than Guy Steele. (I can back up that claim, if anyone wants me to.) So I asked him, and here’s what he said:

“By me, the term ‘scripting language’ is not intrinsic, but extrinsic: it describes the context and application for the language. That context is typically some large mechanism or collection of facilities or operations that may usefully be used one after another, or in combination with one another, to achieve some larger operation or effect. A scripting language provides the means to glue the individual operations together to make one big compound operation, which is typically carried out by an interpreter that simply ‘follows the script’ a step at a time. Typically scripting languages will need to provide at least sequencing, conditional choice, and repetition; perhaps also parallelism, abstraction, and naming. Anything beyond that is gravy, which is why you can put a rudimentary scripting language together quickly.”

Steele’s answer seems in line with John Hennessey’s explanation of what Tcl was meant for. The idea is that you have two languages. At the lower level, you have something like C: suitable for writing programs that are very fast and work with the operating system, but hard to use for anyone but a professional. At the higher level, you have something like Tcl, which is easy to learn and use and very flexible, and which can easily invoke functionality at the lower level. The higher level acts as “glue” for the lower level. Another example like this is Visual Basic, and the way that you can write C programs that fit into VB’s framework.

In my own opinion, this kind of dichotomy isn’t needed in Lisp, where the same language is perfectly suitable for both levels. Common Lisp, as it is used in practice, is not so dynamic that it cannot be compiled into excellent code, but is easy to write for the kind of simple purposes to which Tcl is typically put. (Particularly for inexperienced programmers who are not already wedded to a C/C++/Java-style surface syntax.)

In your own case, you mention “tiny” and “fast-running” executables. I am not sure why “tiny” matters these days: disk space is very cheap, and the byte code used by the JVM is compact. Common Lisp programs compiled with one of the major implementations, and programs written for the Java Virtual Machine, execute at very high speed.

The fact that you distinguish between server-side and client-side applications suggests to me that what you’re really talking about is start-up latency: you’re saying that a very small program written for the JVM nevertheless has a significant fixed overhead that causes perceived latency to the user. Is that what you have in mind?

The last time this question came up, I did my own very quick and dirty test. I tried running a simple program in Clozure Common Lisp, from the command line, and I saw about 40ms of start-up latency on a not-particularly-fast desktop running an old Linux release. A trivial Python program took about 7ms. That’s better, but 40ms is not very noticeable. (I suppose if you’re writing a long command line piping together many “scripts” or running them in a loop, it would start to add up.)

As a hypothetical question just to clarify your meaning: if there were a JVM implementation that started up instantly, so that the speed of execution of a small program would be the same as the speed of the same code appearing in the middle of a long-running server process, would that answer your objections?

---

http://docs.racket-lang.org/ts-guide/

---

coffeescript vs. javascript

---

" However, my understanding of the ! convention was not complete and it often left me questioning the use of the symbol in other methods on other objects. Things just didn’t quite match up to my understanding. For example, in Rails and ActiveModel?, `save!` doesn’t necessarily change anything in the in-memory object. It saves changes to the underlying data store and throws exceptions if any validation or other errors have occurred. The non-! version does the same work, but instead of throwing exceptions, it stores messages in the .errors collection. This was very puzzling to me. Why would they use a ! method name for something that doesn’t always change the object?

Adjusting My Understanding

Page 13 of the Eloquent Ruby book explains the ! convention in a manner that encapsulates my previous understanding and at the same time, expands what the convention covers so that methods like save! no longer seem odd to me. Here’s what Russ has to say:

    Ruby programmers reserve ! to adorn the names of methods that do something unexpected, or perhaps a bit dangerous."

nice convention but we still need a 'mutate' language element too

also, using all? and all! etc for semantically different methods makes things hard to search for in Google

(btw, ruby's all? is http://www.ruby-doc.org/core/classes/Enumerable.html#M001499 )

---

Code Blocks vs Hashes

This is one that Hugo Bonacci mentioned on twitter a few days ago. He had an issue related to code blocks vs hashes:

[screenshot of the tweet]

The reason why this happens had never occurred to me before, but I have run into this problem in the same way he did on many occasions: code blocks and hashes can both use the { } curly brace to denote their beginning and end.

    some_data = { :foo => "bar", :baz => "widget" }

    [1..3].each { |i| puts i }

If you have a method that wants a hash as the parameter and you want to specify that hash in-line with the method call, the following will fail:

    def something(foo)
      foo.each { |k, v| puts "#{k}: #{v}" }
    end

    something { :foo => "bar" }

Ruby will interpret this as a code block even though the developer intends it to be a hash, and it will crash:

    SyntaxError: (irb):4: syntax error, unexpected tASSOC, expecting '}'
    something { :foo => "bar" }

Fortunately, the solution is simple, again. You can either omit the curly braces or wrap the method call with parentheses:

    def something(foo)
      foo.each { |k, v| puts "#{k}: #{v}" }
    end

    something :foo => "bar"
    something({ :foo => "bar" })

I prefer to eliminate the curly braces, just to reduce the syntax noise of the method call.


http://www.tutorialspoint.com/ruby/ruby_loops.htm

while, while modifier, until, until modifier, for. break, next, redo, retry.

---

alternative hash syntaxes in ruby?

ruby-1.9.2-p290 :001 > {name: 'testuser1@pietrust.com', iscategory: false, issystem: false, isuser: true} => {:name=>"testuser1@pietrust.com", :iscategory=>false, :issystem=>false, :isuser=>true}


cheap syntactic sugar for ignoring exceptions of a certain type; as noted above, an important special case is converting a KeyError? to returning nil
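Python (3.4+) has exactly this sugar: contextlib.suppress ignores a named exception type for a block, and dict.get is the built-in KeyError-to-nil special case:

    from contextlib import suppress

    d = {'a': 1}

    with suppress(KeyError):
        value = d['missing']   # would raise KeyError; silently skipped instead

    print(d.get('missing'))    # => None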

---

http://www.robertsosinski.com/2008/12/21/understanding-ruby-blocks-procs-and-lambdas/

" def proc_return Proc.new { return "Proc.new"}.call return "proc_return method finished" end

def lambda_return lambda { return "lambda" }.call return "lambda_return method finished" end

puts proc_return puts lambda_return

  1. => Proc.new
  2. => lambda_return method finished

In proc_return, our method hits a return keyword, stops processing the rest of the method and returns the string Proc.new. On the other hand, our lambda_return method hits our lambda, which returns the string lambda, keeps going and hits the next return and outputs lambda_return method finished. Why the difference?

The answer is in the conceptual differences between procedures and methods. Procs in Ruby are drop in code snippets, not methods. Because of this, the Proc return is the proc_return method’s return, and acts accordingly. Lambdas however act just like methods, as they check the number of arguments and do not override the calling methods return. For this reason, it is best to think of lambdas as another way to write methods, an anonymous way at that. "


an example of something interesting in Ruby:

    @evals.all?(&:save)
is "symbol", i.e. quote; without that, it would try to execute the "save" immediately, in the current scope

& is "to_proc"

all? is "return true iff my argument block (i.e. proc) is true for every element of the array)

what this does is call the "save" method on every element of the @evals array, and see if they all return True. Note that "save" is executed (and located) in the scope of the objects in @evals, not in the current cope.
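for contrast, the Python spelling needs no symbol/to_proc machinery (Eval/evals here are stand-ins for the snippet's objects):

    class Eval:
        def save(self):
            return True   # stub; imagine a real persistence call

    evals = [Eval(), Eval()]
    # calls save() on each element; True iff every call returns something truthy,
    # short-circuiting on the first falsy result just like Ruby's all?
    print(all(e.save() for e in evals))  # => True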


   
----

************************************************************
************************************************************
************************************************************
********** IMPORTANT ***************************************
************************************************************

vectorized indexing is surprisingly hard to do in ruby:

l = [1,2,3]
l.values_at(*[0,2])
 => [1, 3] 


so mb Python is better for matrix algebra.. a[b]'s natural interpretation is what Python (numpy, anyway) does

this can't be fixed in Ruby b/c the notation a[3,6] is already defined in ruby as a certain kind of (start, length) slicing

  [1, 2, 3, 4, 5][1, 3] # => [2, 3, 4]
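for contrast, a quick numpy sketch of the a[b] reading the note means (a plain Python list raises TypeError on a[b]; it's numpy arrays that vectorize it):

    import numpy as np

    a = np.array([1, 2, 3])
    b = [0, 2]
    print(a[b])               # => [1 3]   (numpy "fancy indexing")
    print([a[i] for i in b])  # pure-Python fallback; works on plain lists too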


----

it's ugly, and interferes with chaining, how Python has functions and methods. but it's ugly in Ruby how you have to inject functions into classes in order to make them available (actually i don't think you have to.... currently i'm trying to define a new delete_indices method for arrays). maybe the vaunted Modula-2, or was it OCaml, module system could help Ruby keep module namespaces separate?



it's irritating how ruby has all these methods that unnecessarily take blocks. i'd like to compose each_with_index with reverse, but each_with_index doesn't just return a list of pairs; it also applies a block to them. it would be better to have a separate fn and then compose it with map (with cheap syntactic sugar for mapping).
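Python's enumerate is the decomposed version this note is asking for: it just yields (index, element) pairs, so it composes with reversed (or anything else) instead of demanding a block:

    coll = ['a', 'b', 'c']
    print(list(reversed(list(enumerate(coll)))))
    # => [(2, 'c'), (1, 'b'), (0, 'a')]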


http://blog.fogus.me/2011/08/14/perlis-languages/ lists

joy, eiffel, qi, clojure, kernel, oz, frink, apl, haskell, Squeak, Scheme, Common Lisp, Prolog

among others


http://stackoverflow.com/questions/4603349/python-design-mistakes/4603615#4603615 points out that

http://docs.python.org/release/3.0.1/whatsnew/3.0.html

can serve as a way to learn from Python's mistakes


stackless python


http://python-history.blogspot.com/

http://twit.tv/floss11

" My biggest peeve with Python - and one which was not really addressed in the move to 3.x - is the lack of proper naming conventions in the standard library.

Why, for example, does the datetime module contain a class itself called datetime? (To say nothing of why we have separate datetime and time modules, but also a datetime.time class!) Why is datetime.datetime in lower case, but decimal.Decimal is upper case? And please, tell me why we have that terrible mess under the xml namespace: xml.sax, but xml.etree.ElementTree? - what is going on there? "

---

java has checked and unchecked exceptions

--

http://stackoverflow.com/questions/1011431/common-pitfalls-in-python/1011941#1011941

---

in java, convention for specifying ranges is half-open: beginning inclusive, end exclusive (this checks out: e.g. String.substring(begin, end) and List.subList(from, to) both work that way)


good API principles:

" II. General Principles

        API Should Do One Thing and Do it Well
        API Should Be As Small As Possible But No Smaller
        Implementation Should Not Impact API
        Minimize Accessibility of Everything
        Names Matter–API is a Little Language
        Documentation Matters
        Document Religiously
        Consider Performance Consequences of API Design Decisions
        Effects of API Design Decisions on Performance are Real and Permanent
        API Must Coexist Peacefully with Platform
    III. Class Design
        Minimize Mutability
        Subclass Only Where it Makes Sense
        Design and Document for Inheritance or Else Prohibit it
    IV. Method Design
        Don't Make the Client Do Anything the Module Could Do
        Don't Violate the Principle of Least Astonishment
        Fail Fast - Report Errors as Soon as Possible After They Occur
        Provide Programmatic Access to All Data Available in String Form
        Overload With Care
        Use Appropriate Parameter and Return Types
        Use Consistent Parameter Ordering Across Methods
        Avoid Long Parameter Lists
        Avoid Return Values that Demand Exceptional Processing

"

"


toread: http://blog.objectmentor.com/articles/2009/02/26/10-papers-every-programmer-should-read-at-least-twice , http://news.ycombinator.com/item?id=2922108


Programming Language Checklist by Colin McMillen?, Jason Reed, and Elly Jones.

You appear to be advocating a new: [ ] functional [ ] imperative [ ] object-oriented [ ] procedural [ ] stack-based [ ] "multi-paradigm" [ ] lazy [ ] eager [ ] statically-typed [ ] dynamically-typed [ ] pure [ ] impure [ ] non-hygienic [ ] visual [ ] beginner-friendly [ ] non-programmer-friendly [ ] completely incomprehensible programming language. Your language will not work. Here is why it will not work.

You appear to believe that: [ ] Syntax is what makes programming difficult [ ] Garbage collection is free [ ] Computers have infinite memory [ ] Nobody really needs: [ ] concurrency [ ] a REPL [ ] debugger support [ ] IDE support [ ] I/O [ ] to interact with code not written in your language [ ] The entire world speaks 7-bit ASCII [ ] Scaling up to large software projects will be easy [ ] Convincing programmers to adopt a new language will be easy [ ] Convincing programmers to adopt a language-specific IDE will be easy [ ] Programmers love writing lots of boilerplate [ ] Specifying behaviors as "undefined" means that programmers won't rely on them [ ] "Spooky action at a distance" makes programming more fun

Unfortunately, your language (has/lacks): [ ] comprehensible syntax [ ] semicolons [ ] significant whitespace [ ] macros [ ] implicit type conversion [ ] explicit casting [ ] type inference [ ] goto [ ] exceptions [ ] closures [ ] tail recursion [ ] coroutines [ ] reflection [ ] subtyping [ ] multiple inheritance [ ] operator overloading [ ] algebraic datatypes [ ] recursive types [ ] polymorphic types [ ] covariant array typing [ ] monads [ ] dependent types [ ] infix operators [ ] nested comments [ ] multi-line strings [ ] regexes [ ] call-by-value [ ] call-by-name [ ] call-by-reference [ ] call-cc

The following philosophical objections apply: [ ] Programmers should not need to understand category theory to write "Hello, World!" [ ] Programmers should not develop RSI from writing "Hello, World!" [ ] The most significant program written in your language is its own compiler [ ] The most significant program written in your language isn't even its own compiler [ ] No language spec [ ] "The implementation is the spec" [ ] The implementation is closed-source [ ] covered by patents [ ] not owned by you [ ] Your type system is unsound [ ] Your language cannot be unambiguously parsed [ ] a proof of same is attached [ ] invoking this proof crashes the compiler [ ] The name of your language makes it impossible to find on Google [ ] Interpreted languages will never be as fast as C [ ] Compiled languages will never be "extensible" [ ] Writing a compiler that understands English is AI-complete [ ] Your language relies on an optimization which has never been shown possible [ ] There are less than 100 programmers on Earth smart enough to use your language [ ] ____________________________ takes exponential time [ ] ____________________________ is known to be undecidable

Your implementation has the following flaws: [ ] CPUs do not work that way [ ] RAM does not work that way [ ] VMs do not work that way [ ] Compilers do not work that way [ ] Compilers cannot work that way [ ] Shift-reduce conflicts in parsing seem to be resolved using rand() [ ] You require the compiler to be present at runtime [ ] You require the language runtime to be present at compile-time [ ] Your compiler errors are completely inscrutable [ ] Dangerous behavior is only a warning [ ] The compiler crashes if you look at it funny [ ] The VM crashes if you look at it funny [ ] You don't seem to understand basic optimization techniques [ ] You don't seem to understand basic systems programming [ ] You don't seem to understand pointers [ ] You don't seem to understand functions

Additionally, your marketing has the following problems: [ ] Unsupported claims of increased productivity [ ] Unsupported claims of greater "ease of use" [ ] Obviously rigged benchmarks [ ] Graphics, simulation, or crypto benchmarks where your code just calls handwritten assembly through your FFI [ ] String-processing benchmarks where you just call PCRE [ ] Matrix-math benchmarks where you just call BLAS [ ] Noone really believes that your language is faster than: [ ] assembly [ ] C [ ] FORTRAN [ ] Java [ ] Ruby [ ] Prolog [ ] Rejection of orthodox programming-language theory without justification [ ] Rejection of orthodox systems programming without justification [ ] Rejection of orthodox algorithmic theory without justification [ ] Rejection of basic computer science without justification

Taking the wider ecosystem into account, I would like to note that: [ ] Your complex sample code would be one line in: _______________________ [ ] We already have an unsafe imperative language [ ] We already have a safe imperative OO language [ ] We already have a safe statically-typed eager functional language [ ] You have reinvented Lisp but worse [ ] You have reinvented Javascript but worse [ ] You have reinvented Java but worse [ ] You have reinvented C++ but worse [ ] You have reinvented PHP but worse [ ] You have reinvented PHP better, but that's still no justification [ ] You have reinvented Brainfuck but non-ironically

In conclusion, this is what I think of you: [ ] You have some interesting ideas, but this won't fly. [ ] This is a bad language, and you should feel bad for inventing it. [ ] Programming in this language is an adequate punishment for inventing it.

-- http://colinm.org/language_checklist.html

---

haven't watched this yet: http://www.infoq.com/presentations/Simple-Made-Easy

http://blog.fogus.me/2011/10/18/programming-language-development-the-past-5-years/

http://www.jetbrains.com/mps/docs/tutorial.html meta programming system


pandas has something like an R dataframe for Python. it should be n-dim, tho.

Series.shift (shift series or time series) is interesting


python dict but also bidirectional dict and defaultdict
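defaultdict is in the stdlib (collections); a bidirectional dict isn't, but a toy sketch of both (Bidict is a made-up name):

    from collections import defaultdict

    d = defaultdict(list)
    d['k'].append(1)   # missing keys are created as list() instead of raising KeyError

    class Bidict(dict):
        # toy bidirectional dict: keeps an inverse map in sync on assignment
        def __init__(self, *args, **kwargs):
            super(Bidict, self).__init__(*args, **kwargs)
            self.inverse = dict((v, k) for k, v in self.items())

        def __setitem__(self, key, value):
            super(Bidict, self).__setitem__(key, value)
            self.inverse[value] = key

    b = Bidict(a=1)
    b['b'] = 2
    print(b.inverse[2])  # => 'b'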

---

    bag
    A bag, or multiset, is a collection that has no order and no limit on unique elements. 
    setlist
    A setlist is an ordered collection of unique elements, combining both set and list. 
           AKA unique list, ordered set, bijection int->hashable, and ordered collection of unique elements

-- http://code.google.com/p/python-data-structures/wiki/CollectionsExtendedProposal

IndexedList?: like this except the index should return a list of ALL places the item is found, in case the same item was added multiple times:

class IndexedList(list):
    # index tells you where in the list you can find the value in sub-linear time
    # items must be hashable
    # don't mutate outside of __init__ and .append unless you keep the .index_dict up to date

    def __init__(self, *args, **kwargs):
        super(IndexedList,self).__init__(*args, **kwargs)
        self.index_dict = {}
        self.index = self.index_dict.__getitem__
        for i in range(len(self)):
            self.index_dict[self[i]] = i
    def append(self, item):
        super(IndexedList,self).append(item)
        self.index_dict[item] = len(self)-1
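a minimal sketch of the "return ALL places" variant asked for above (MultiIndexedList is a made-up name; the same caveats about outside mutation apply):

    from collections import defaultdict

    class MultiIndexedList(list):
        # like IndexedList, but index(item) returns ALL positions where item occurs
        def __init__(self, *args, **kwargs):
            super(MultiIndexedList, self).__init__(*args, **kwargs)
            self.index_dict = defaultdict(list)
            for i, item in enumerate(self):
                self.index_dict[item].append(i)

        def index(self, item):
            return self.index_dict[item]

        def append(self, item):
            super(MultiIndexedList, self).append(item)
            self.index_dict[item].append(len(self) - 1)

    m = MultiIndexedList(['a', 'b', 'a'])
    print(m.index('a'))  # => [0, 2]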

in general, collections should have variants that are/are not mutable, unique, ordered, indexed. And perhaps also various performance notes like: is append fast? is lookup fast? is insert fast?


scalar context so that you can opt not to vectorize in a uniform way when you pass a list to a fn that automatically vectorizes


this is annoying:

     ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

should default to .any()
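the error comes from coercing an array to a single bool; numpy makes you pick:

    import numpy as np

    a = np.array([1, 0, 2])
    # bool(a) or `if a:` raises the ValueError quoted above
    print(a.any())  # => True  (at least one element is nonzero)
    print(a.all())  # => False (0 is falsy)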


even Python can do this:

((a,c),b) = ((1,3),2)


pandas dataframe is optimized for the case in which the index is both HOMOGENEOUS and IMMUTABLE. dtypes are per-COLUMN, and you can append but it's a pain and is computationally inefficient (i believe that a whole new dataframe is created)


{4,3,2} is a Python set literal!


hard problems in language design:


language extensibility mechanisms:

toread:

Modern Extensible Languages - Computing and Software ... www.cas.mcmaster.ca/sqrl/papers/SQRLreport47.pdf

Modern Extensible Languages www.danielzingaro.com/extensible.pdf


toread:

http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/

http://blog.ircmaxell.com/2012/04/php-sucks-but-i-like-it.html


http://satyajit.ranjeev.in/2012/05/17/python-a-few-things-to-remember.html

---

http://en.wikipedia.org/wiki/Pure_type_system

---

whaa? Peirce's law == call/cc under Curry-Howard? http://en.wikipedia.org/wiki/Peirce%27s_law ... Peirce's law is based on the notion that X -> Y is vacuously true if X is false

---

if a -> b means there exists an edge from a to b, then a -*> b is sometimes used to denote that there is a path from a to b.

---

http://www.cs.jhu.edu/~scott/pl/lectures/types.html

---

http://devopsanywhere.blogspot.com/2011/09/how-ruby-is-beating-python-in-battle.html

 How Ruby is beating Python in the battle for the Soul of System Administration 

"

Here are the features in a scripting language that a sysadmin wants

    A DSL for the problem domain
    High productivity, i.e. concise and expressive syntax
    Easy to interaction with shell commands
    Regular Expressions
    powerful one-liners

"

" I don't use Ruby at all, personally. I use Python and Bash almost exclusively. I use Puppet but am not a huge fan, and I certainly have no plans to switch to ruby. I mostly agree with your original point.

However, the CLI thing is a legitimate advantage in favor of Ruby and Perl. Python is just really annoying to use for system administration one-liners. A number of fundamental design choices that have a minimal impact on even the smallest .py files make 'python -c' cumbersome.

Specifically:

A number of common tasks available as syntax in Perl and Ruby are in libraries in python. In particular, you often need to import sys, os, re, and subprocess. Not a real issue writing scripts, but adds a lot of overhead to a single line.

You can't pass a DEDENT token to '-c', at least I haven't figured it out, meaning you can't use more than one loop or control structure. You can work around this to some extent using list comprehensions.

Python's string literal syntax is more limited. I have never had an issue with this writing scripts, however from the command line it can be annoyingly tricky to keep track of which quote characters are needed. Ruby and Perl both have non-conflicting options for notating string literals. In shell scripts, strings are the default literal and you only need to worry about keywords and special characters.

Another advantage of shell are list literals. (I don't have enough practice to know how Ruby and Perl fare in this regard).

Bash also has some extremely convenient list expansion syntax.

    for fqdn in {www,news}.ycombinator.com ; do echo $fqdn ; done
    python -c "import sys; [sys.stdout.write('%s\n' % fqdn) for fqdn in ['%s.ycombinator.com' % sd for sd in ['www', 'news']]]""

"

     Last 10 lines in a file? 'tail -10 filename.txt'

A lot uglier, but you can do a one liner ...

     ruby -e 'lines=[]; while gets(); lines << $_; \
       lines = lines[-10,10] unless $. < 10; end; \
       puts lines' < filename.txt

The problem with using "the right tool for the right job" is that there are many tools and many jobs, and a general purpose language helps you get things done in case you're stuck. In general I use Ruby for doing string manipulation before piping to other commands.

Here's an (ugly) one liner that shows the biggest tables in a MySQL? database:

     mysql -u root godzilla -s -e 'show tables' \
     | ruby -ne '$_.strip!; puts %Q{SELECT count(*) \
       as cnt, "#$_" as tbl FROM #$_; }' \
     | mysql -u root godzilla -s \
     | sort -nr \
     | ruby -ne 'parts = $_.split /\s+/; \
       puts "%40s : %s" % parts.reverse' \
     | head -10

You can probably come up with something more clever, but I barely gave that one any thought.


jbert 316 days ago

> A lot uglier, but you can do a one liner ...

Yeah, but that one-liner reads the whole file. 'tail' can be more clever (and is). 'strace' output:

    open("a", O_RDONLY)                     = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=545975512, ...}) = 0
    lseek(3, 0, SEEK_CUR)                   = 0
    lseek(3, 0, SEEK_END)                   = 545975512
    lseek(3, 545972224, SEEK_SET)           = 545972224
    read(3, "tu 11.04 \\n \\l\n\nUbuntu 11.04 \\n "...,     3288) = 3288

'tail' is seeking to the end, then reading backwards.

The point being, these 'little tools' can contain important optimisations - they aren't necessarily equivalent to a naive reimplementation.


lloeki 316 days ago

Reminds me on that article on GNU grep doing some speedy magic: http://lists.freebsd.org/pipermail/freebsd-current/2010-Augu... "

"

nabb 316 days ago

tail can be done in ruby can be quite succinctly with the $< (ARGF) variable:

  ruby -e'$><<$<.map.last(10)'

For the rest of the ruby one-liners in the page the author references, most can be done more easily with standard command line tools (although most people aren't well-versed in sed, so 'ruby -pe puts' might be better than 'sed G').


viraptor 316 days ago

There's one big problem with code like that. If you're doing devops-y stuff, then you really, really don't want to debug something like that when your pager goes off at 3am. The cleaner the solution the better, because after you wake up `$><<$<` is just a blurred thing with no meaning...


davnola 316 days ago

Cool. That hurts my eyes.

    ruby -e "print ARGF.readlines.last(10)"

rytis 316 days ago

on, say, a 5GB log file?..

If I read this correctly (http://www.ruby-doc.org/core/classes/IO.html#M000914) it'll read the whole file into an array, then spit out last 10 entries?


davnola 316 days ago

Yes it does - good point, but the sigil-tastic snippet in the parent reads the whole file into an array, too - it's just harder to tell.

Plus the parent snippet is not Ruby 1.9 compatible.

Anyways, my version was just to illustrate that sysadmin scripting in Ruby does not have to be illegible.


prolepunk 316 days ago

Not that you can't do this in python:

python -c "print '\n'.join(open('LICENSE.txt','r').read().split('\n'))"

But this is besides the point.


jodrellblank 316 days ago

Your split and join seem redundant, and open defaults to read-only.

print open('whatever.file').readlines()[-10:]

But still, don't do that on any log too big to read into memory.


dramaticus3 316 days ago

tac filename.txt | sed 10q | tac

"

"

antoncohen 316 days ago

The short/one-liner examples are more likely to be done from command-line shell, where they would be much easier:

  head -10 /path/to/file
  dmidecode | grep -iq vmware && echo "is vmware" || echo "is not vmware"

The list of projects written in each language has more to do with what else you use than the languages, e.g., if you are working with Ruby web apps you may use Rake and Capistrano. SCons, Mercurial, Bazaar, and YUM are all written in Python.

"

"

glenjamin 316 days ago

This essentially reduces to "it's easier to write a fluent, readable, flexible and powerful DSL in Ruby than it is in Python."

Sysadmin tools benefit greatly from using such a DSL as their interface vs custom parsers or configuration files.

Hence, Ruby is a good choice for these sorts of things. "

---

http://patriciopalladino.com/blog/2012/08/09/non-alphanumeric-javascript.html

---

js has two ==s, == and ===. == coerces type, === does not. sounds like people only like the non-type-coercing one, see http://stackoverflow.com/questions/359494/javascript-vs-does-it-matter-which-equal-operator-i-use

--

js bind: bind 'this' inside a function to the given object

---

http://ricardo.cc/2011/06/02/10-CoffeeScript-One-Liners-to-Impress-Your-Friends.html

--

coffeescript's ?. : based on an example in https://tech.dropbox.com/?p=361 , it looks like a?.b means "a and a.b", that is, if a is null/undefined then the whole expression is undefined, but if not, it gives you a.b.

--

"

Problems with optional parenthesis

Take a look at these two snippets. Next to the CoffeeScript? code is the resulting JavaScript?:

doSomething () -> 'hello'

doSomething(function() { return 'hello'; });


doSomething() -> 'hello'

doSomething()(function() { return 'hello'; }); "

-- http://ceronman.com/2012/09/17/coffeescript-less-typing-bad-readability/


http://tech.t9i.in/2013/01/why-program-in-go/

"It combines the elegance of Python with the performance of C and C++. Even if it is only 90% as easy as Python and 90% as fast as C or C++, it still works out to be a winning combination. "

" One of the books I read was this book http://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695, which offers a kind of gestational walk-through of programming language features using an academic programming language called Oz. Somewhere in its discussion, the book introduces a concept of "dataflow programming" and "declarative concurrency" http://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency. Now, you might want to follow the preceding link and briefly acquaint yourself with this concept because it is the centre-piece of Go's language features.

When I learned about dataflow programming, I recalled how big a deal it was at Yahoo to be able to fetch data from multiple data sources in parallel and have resolution policies like fastest-source-wins, or wait-until-N-of-M-responses, etc., each with its own timeout handling and so on. The standard solution was a Java daemon with XML documents to describe the intended dataflow. It wasn't pretty. I wished for this capability to exist in a language that, unlike Oz, wasn't academic.

I did not realise at that time that Go was what I was looking for. "
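those resolution policies are easy to sketch with Python's concurrent.futures (fetch_a and fetch_b are hypothetical data sources); fastest-source-wins is FIRST_COMPLETED plus a timeout, and wait-until-N-of-M is a loop over wait():

    import time
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    def fetch_a():
        time.sleep(0.1)
        return 'from a'

    def fetch_b():
        time.sleep(0.5)
        return 'from b'

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f) for f in (fetch_a, fetch_b)]
        # fastest-source-wins, with a 1-second timeout:
        done, pending = wait(futures, timeout=1.0, return_when=FIRST_COMPLETED)
        print(next(iter(done)).result())  # => 'from a'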

" (abbreviated:)

" (abbreviated:)

and another author says: "

"

---

Kotlin's 'when' is a better 'switch':

when (obj) {
  is String => obj[0]
  is Int => obj + 1
  !is Boolean => null
  else => 0
}

when (x) {
  0 => "Zero"
  1, 2, 3 => "1, 2 or 3"
  x + 1 => "Really strange"
  in 10..100 => "In range"
  !in 100..1000 => "Out of range"
}

---

Kotlin uses the word 'trait' to mean something like 'stateless class'

---

goroutines (green threads)

---

inter-process channels

---

they say that Go has error objects but not exceptions; what exactly does that mean? (apparently that errors are ordinary values returned from functions, usually as a second return value, rather than something that unwinds the stack; Go's panic/recover exist but aren't used for routine error handling)


todo read http://news.ycombinator.com/item?id=4721550


" y. There are no standard language features for very basic things like namespaces and modules, also what it provides for OOP is so confusing that it makes people want to roll their own OO system from scratch. Objects as dictionaries and first-class functions make it possible to implement those features in hundreds of ways and almost every JavaScript? program tends at least in some places to use some own, original solution for one of the very basic problems of structuring programs, because it is so easy to built one. It is good that this is possible, it is bad that people _have to_ do this (or choose one of the countless existing solutions) before they can actually write the program they want to write.

A lot of the problems would go away if the following was part of the language and present in every browser:

Then, at least the basic structure of the code would be consistent across programs from various people, and I wouldn't have to work out the details of yet another OO system, yet another way of just laying out the code across functions/objects each time I want to read another JavaScript? program or yet another way of doing function () {}.bind that works across the browsers (here the problem is also the long time it takes for the majority of people to install a browser new enough to adopt the revised standards). And it's not only about reading code, how do you write an object-oriented refactoring tool if every program realizes OO in its own distinct way?

The situation is probably slightly better with server side JS where you are free to adapt the newest version of the language/runtime and also the server-side frameworks at least to some extend tend to encourage to use a common structure for all the modules.


garindra 218 days ago

Your 4 points are exactly why I always use Require.js, Backbone, and Underscore in all my large projects.

Require.js solves point #1 and point #3. Backbone solves point #2. Underscore solves, well, point #4.

In my experience, the other important thing to maintain coherence and sanity while creating large JS apps is to have a system that makes dependencies between modules very, very clear. Require.js basically also does that for me; it requires every module to define their dependency on other modules on top of its file.

I highly recommend people creating large JS apps to at least check it out : http://requirejs.org/ "

"

> Type inference is a huge productivity win.

When writing code, sure. When reading and/or maintaining code, not so much. Especially when it stretches across function or method boundaries. Languages like C++ and C# have the right happy medium in my opinion: `auto`/`var` keywords for type-inferenced local variables, but explicit parameters and return types for all methods.

"


" The key difference between (Actor and CSP) models is that CSP introduces the notion of a channel. Processes don't communicate with each other directly; instead, they send events (messages) to channels, and receive events from channels. A channel is like a pipe. In the CSP model, message delivery is instantaneous and synchronous. " -- http://www.informit.com/articles/article.aspx?p=1768317

in Go, channels can be synchronous, or they can be buffered
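a rough Python analogy for the buffered case (queue.Queue is a bounded buffer; note that Go's unbuffered channel is a true rendezvous, where the sender blocks until a receiver takes the value, which Queue(maxsize=1) only approximates):

    import queue

    buffered = queue.Queue(maxsize=10)     # put() blocks only once 10 items are waiting
    nearly_sync = queue.Queue(maxsize=1)   # put() blocks while one item sits unconsumed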

" Selective receiving of messages was not a work around to the lack of channels, it is an integral part of handling messages. It also allows for more control in which messages a process wishes to receive, not just selecting on which process sent the message, you can select messages by structure and values as well. Or a combination of all of them. "

"If you are going to discuss Erlang's concurrency model then you must include the process error handling which is a fundamental part of the concurrency model. They have been designed to work together and not discussing together is missing half the point. "


Erlang process error handling: not sure yet what the fuss is about. Seems like Erlang makes it easy to chain processes together so that either (a) one process crashes when another one crashes, or (b) one process gets a message in its mailbox when another one crashes. The topology is a network, not a chain, so one crashing process can signal multiple other processes when it crashes (exits).

http://www.erlang.org/doc/reference_manual/processes.html#id82744


Lisp's cond:

(cond (condition1 expression1)
      (condition2 expression2)
      (condition3 expression3))

i think it goes through until it finds a condition that matches, if any (instead of 'else', you can make condition3 = true ('t', in Lisp terms))
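
the same control shape in Python terms (a sketch; the conditionN/expressionN names are placeholders):

    if condition1:
        result = expression1
    elif condition2:
        result = expression2
    elif True:               # like making condition3 = t for a final catch-all
        result = expression3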


continuations


STM (software transactional memory) (clojure's 'dosync' blocks)


Lisp's ability to basically add new 'block-level' language features, presumably via macros, e.g. Clojure's dosync blocks


laziness


subtext: decision tables instead of conditionals (visualization of DNF). Combines with function expressions to create "schematic tables" which are apparently a complete programming language. see fig. 13 vs. fig. 14 of http://subtextual.org/OOPSLA07.pdf

Haskell damage By Jonathan Edwards

Published: April 26, 2010

Per Vognsen suggests a Haskell version of the damage function from my Schematic Tables talk:

data Attack = Magic | Melee

hit surprise defense attack = damage where
  effectiveness = power * (if surprise then 3 else 2)
  (power, damage) = case attack of
    Magic -> (5, max 0 (effectiveness - defense))
    Melee -> (4, 2 * effectiveness / defense)

This passes pairs of values (power, damage) through conditionals. That allows the conditionals to multiplex more than one value, rather than being repeated. I suspect I could stymie that approach with more complex data flows, but I am happy to acknowledge that Haskell does OK on this example. Thanks, Per. Posted in General

Comments closed

http://alarmingdevelopment.org/?p=358#more-358

"

Typed Subtext By Jonathan Edwards

Published: March 1, 2010

The last several months I have been trying to make coherence deterministic, using what PL theoreticians call a type and effect system. The effect system is telling me to return to the tree data model of Subtext (which I had set aside as too complex), but with static types. Therefore I am redefining Subtext as a statically typed language, with classes and inheritance and so on. I have sketched this out, and it seems to magically resolve many of the complexities that were stumping me before.

Why didn’t I see this before? Maybe I drank too much of the Self kool-aid. Subtext has always been a prototypical language like Self. That is an elegant idea, but static classes really are a handy way to impose structure on the chaos. That structure is needed to guarantee deterministic parallelism, and it also comes in handy in a lot of other places where you want to impose constraints.

My next step is a precise description of typed Subtext, focusing on the data model. I may start off with a normal imperative semantics and add coherence in the subsequent step. I am going to use a textual syntax, since that is what people need and expect. Subtextuality manifests in that the syntax will require an IDE to be easily editable, since it is actually just a pretty-printing of an underlying code meta-model. "


persistence


" It is reasonable to engineer a great deal more robustness into our languages. Garbage collection, persistence, automatic code-distribution for redundancy and scalability, object capability model and typing for security and safety, support for timing models to better synchronize effectors (sound, video, robotic motion) without a lot of hacks, automated scheduling based on dependencies in order to avoid update conflicts, etc." http://alarmingdevelopment.org/?p=392#comment-58418

---

http://www.scala-lang.org/node/8610

" Since you were gracious enough to reply to me, let me be bold enough to offer a suggestion. The impression that Scala is too complicated for average programmers may be a big obstacle. As you said in a recent post, Scala can be subsetted for different levels of sophistication. A frequent complaint about C++ was that while everyone agreed on the need to subset the language, no one agreed on what that subset should be. Perhaps you should define an official language subset for application programming, and enforce it on a package basis.

... Jonathan, I agree with your suggestion for defining subsets that correspond to different levels of sophistication. We have taken a first stab at this. See:

    http://www.scala-lang.org/node/8610
    We are thinking of having tool support for enforcing some of this, but have not yet started working on it in earnest.
        Vincent
        Posted February 23, 2011 at 2:09 pm | Permalink
        How difficult would it be to have the compiler enforce the different levels according to a compiler option?
        I’m the author of a Sip Server written in Scala, and so far I haven’t even ventured into “L2” or “L3”.
        I’d sleep easier at night knowing that my build scripts could enforce that no one checks in a module with some unnecessary complexity best reserved for core library designers or seriously complex java interop. "

--- a way to write a self-recursive anonymous function
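
e.g. in Python, a sketch: either pass the lambda to itself, or use a strict-language Y/Z fixed-point combinator:

    # self-application: the anonymous function receives itself as an argument
    fact = lambda self, n: 1 if n == 0 else n * self(self, n - 1)
    fact(fact, 5)   # 120

    # or a Z combinator, so the recursive function never names itself
    fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
    fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))(5)   # 120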


tail call optimization
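
the transformation a TCO'ing compiler performs, sketched by hand in Python (which itself does not eliminate tail calls):

    # tail-recursive: the recursive call is the last thing evaluated
    def total(xs, acc=0):
        if not xs:
            return acc
        return total(xs[1:], acc + xs[0])   # still grows Python's stack

    # what TCO effectively compiles it to: a loop with O(1) stack
    def total_loop(xs, acc=0):
        while xs:
            xs, acc = xs[1:], acc + xs[0]
        return acc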

---

note that multiple return argument syntax can be mimicked by passing by reference
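
a small Python sketch of the analogy (the helper and names are mine): simulate the second return value with a mutable out-parameter, the way C code passes a pointer:

    def divide(a, b, out):
        out["remainder"] = a % b   # 'returned' through the reference
        return a // b

    out = {}
    q = divide(7, 2, out)          # q == 3, out["remainder"] == 1
    # vs. real multiple-return syntax: q, r = divmod(7, 2)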

---

Algol-68's unified loop syntax:

for i from 0 to 10 by 1 while condition do ~ od

(see http://cowlark.com/2009-11-15-go/ )

" ...and all the modifier keywords are optional. Remove them all and you get an infinite loop, as shown above. Some odd combinations are possible, and useful...

  # keep incrementing i from 0 indefinitely, until the flag is set #
  for i from 0 by 1 while not cancelled do ~ od

  # run loop 10 times, but don't bother with a counter #
  to 10 do ~ od

Brand X is definitely clearer and more expressive than Go here. In particular, I don't like Go's use of for for while loops... the clue is in the name!

But wait! There's more. Remember how I said that Brand X allowed ; in expressions? This is perfectly valid:

while i := calculate_i; print(i); i < 0 do dosomething od

In fact, the scoping is such that you can declare variables inside these expressions, leading to the very convenient idiom:

if int errorcode = dosomething; errorcode /= 0 then reporterror(errorcode) fi

(Brand X's /= operator is equivalent to Go and C's != operator.)

You can do the reverse, too, embedding if statements into expressions:

print(if (c > 0) then "X is positive" else "X is not positive" fi)

...and because this is messy, there's an alternative syntax for this:

print((c > 0 | "X is positive" | "X is not positive"))

"

---

" I think Python is especially hard to get right in the presence of Unicode strings, though I've read rumors that 3k helps. The Go language defines its string type as UTF-8 and provides a separate type for collections of bytes, but even at this early stage you see them getting it wrong (e.g. the textproto package provides strings but doesn't discuss encoding anywhere). "

---

" The web. Half the security problems you see (SQL injection, XSS) on the internet can be interpreted as a language-mixing problem. I won't even go into it, but instead link to one of the many fine blog posts on how type systems can help. http://blog.moertel.com/articles/2006/10/18/a-type-based-solution-to-the-strings-problem "

e.g. i guess string types need attributes: safe or unsafe; unicode encoding

" I think this kind of nitpicky detail is just the sort of thing that type systems are especially good at getting right, but they only help if you're careful to use a new type at each language boundary: "ok, this string is the content of the mail message, and now we transform that into the mbox language..."

Maybe the mental model I use, of always associating a given string with a given language will help you keep things straight, like the implicit numeric base we use with numbers: "123" implicitly means "123 in base 10". The next time you have a user's name and want to print it, hopefully you'll think "I need to convert that from base unicode into base bytestring suitable for stdout." "
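
a minimal Python sketch of that newtype discipline (class and function names are mine, not from the linked post):

    import html

    class Html(str):
        """a string already known to be safe to splice into HTML"""

    def escape(s: str) -> Html:
        return Html(html.escape(s))

    def render(fragment: Html) -> None:
        # a static checker (or this runtime check) rejects plain str here
        assert isinstance(fragment, Html), "unescaped string at the boundary"
        print(fragment)

    render(escape("<script>alert('hi')</script>"))   # safe
    # render("<script>...</script>")  # would trip the boundary check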

---

garbage collection for things other than memory ('resource management'):

" The language prominently features garbage collection. Garbage collection for memory today is practically a given. You can't appeal to Java programmers without it. But collecting garbage is not about just memory objects. In real programs, there are many other forms of garbage to recycle: temporary files, open files, locks, threads, open network connections, ... Furthermore, the requirements placed on the garbage collector may vary from application to application. Providing a garbage collector by default is good, providing one that is implemented in the library, that you can taylor to your needs and [gasp] apply to non-memory objects would be so much better... In short, is Go's garbage collection worth the prominent position that its designers gave to it in the presentations? I don't think so personally. "

---

generics
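
e.g. Python's flavor (checker-enforced, erased at runtime):

    from typing import Generic, TypeVar

    T = TypeVar("T")

    class Stack(Generic[T]):
        def __init__(self) -> None:
            self._items: list[T] = []
        def push(self, item: T) -> None:
            self._items.append(item)
        def pop(self) -> T:
            return self._items.pop()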

---

"

    I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." -- Tony Hoare

---

"It turns out that eval() is one of the key things that gets in the way of performance optimizations. It can easily invalidate all sorts of otherwise reasonable assumptions about variable bindings, stack addresses and so on. It's also pretty important, so you can't just get rid of it." -- http://steve-yegge.blogspot.com/2007/02/next-big-language.html


" Here's a short list of programming-language features that have become ad-hoc standards that everyone expects:

    Object-literal syntax for arrays and hashes
    Array slicing and other intelligent collection operators
    Perl 5 compatible regular expression literals
    Destructuring bind (e.g. x, y = returnTwoValues())
    Function literals and first-class, non-broken closures
    Standard OOP with classes, instances, interfaces, polymorphism, etc.
    Visibility quantifiers (public/private/protected)
    Iterators and generators
    List comprehensions
    Namespaces and packages
    Cross-platform GUI
    Operator overloading
    Keyword and rest parameters
    First-class parser and AST support
    Static typing and duck typing
    Type expressions and statically checkable semantics
    Solid string and collection libraries
    Strings and streams act like collections

Additionally, NBL will have first-class continuations and call/cc. I hear it may even (eventually) have a hygienic macro system, although not in any near-term release.

Not sure about threads. I tend to think you need them, although of course they can be simulated with call/cc. I've also noticed that languages with poor threading support tend to use multiprocessing, which makes them more scalable across machines, since by the time you've set up IPC, distributing across machines isn't much of an architectural change. But I think threads (or equivalent) are still useful. Hopefully NBL has a story here. " -- http://steve-yegge.blogspot.com/2007/02/next-big-language.html

" Rule 6: Multi-Platform

NBL will run, at a minimum, both standalone and on the JVM. I'm not sure about plans for .NET, but it seems like that will have to happen as well.

And there are two other platforms that NBL will run on which, more than anything else, are responsible for its upcoming dominance, but I'd be giving away too much if I told you what they were. " -- http://steve-yegge.blogspot.com/2007/02/next-big-language.html

---

" The features I've outlined don't make NBL a great language. I think a truly great language would support Erlang-style concurrency, would have a simpler syntax and a powerful macro system, and would probably have much better support for high-level declarative constructs, e.g. path expressions, structural dispatch (e.g. OCaml's match ... with statement) and query minilanguages. Among other things. "

---

" type-inference, auto roll/unroll for recursive types, pack/unpack for existential types, etc). "

---

" I think there’s a good answer to your question in that correspondence between logic and programming. The function, or “implication connective” (aka “->”), is an important tool and ought to feature in any modern language. As well, there are other logical connectives that should be examined. For example, conjunction is common (manifested as pair, tuple, or record types in a programming language), but disjunction (corresponding to variant types) is less common though no less important. Negation (corresponding to continuations consuming the negated type), predicated quantified types, and so on. The tools for building better software are there, but we need to work at recognizing them and putting them to good use. Anybody interested in understanding these logical tools better should pick up a copy of Benjamin Pierce’s book Types and Programming Languages. "

---

"

KT: I guess it depends on what you mean by “DSL”. Like you say, some people just have in mind some convenient shorthand serialization for data structures (I’ve heard some people refer to a text format for orders as a DSL, for example). I’m sure that will always be around, and there’s nothing really profound about it.

On the other hand, by “DSL” you could mean some sub-Turing language with non-trivial semantics. For example, context-free grammars or makefiles. Modern programming languages, like Haskell, are often used to embed these “sub-languages” as combinator libraries (the “Composing Contracts” paper by Simon Peyton-Jones et al is a good example of this). "

" If you take monadic parser combinators for example, it’s very attractive the way that they fit together within the normal semantics of Haskell, however you’ve got to go through some severe mental gymnastics to determine for certain what the space/time characteristics of a given parser will be. Contrast that with good old LALR(1) parsers, where if the LR table can be derived you know for certain what the space/time characteristics of your parser will be. "

---

" Another source of concision is OCaml's notation for describing types. At the heart of that notation is the notion of an algebraic datatype. Algebraic datatypes are what you get when you have a system that includes two ways of building up new types: products and sums.

A product type is the more familiar of the two. Tuples, records, structs, and objects are all examples of product types. A product type combines multiple values of different types into a single value. These are called product types because they correspond mathematically to Cartesian products of the constituent types.

A sum type corresponds to a disjoint union of the constituent types, and it is used to express multiple possibilities. Where product types are used when you have multiple things at the same time (a and b and c), sum types are used when you want to enumerate different possibilities (a or b or c). Sum types can be simulated (albeit somewhat clumsily) in object-oriented languages such as Java using subclasses, and they show up as union types in C. But the support in the type systems of most languages for interacting with sum types in a safe way is surprisingly weak. "
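
a sketch of the sum/product distinction in Python 3.10+ terms (the types here are mine):

    from dataclasses import dataclass

    @dataclass
    class Circle:            # product: has a radius
        radius: float

    @dataclass
    class Rect:              # product: has a width and a height
        w: float
        h: float

    Shape = Circle | Rect    # sum: a Shape is a Circle or a Rect

    def area(s: Shape) -> float:
        match s:             # consuming the sum safely, case by case
            case Circle(radius=r):
                return 3.14159 * r * r
            case Rect(w=w, h=h):
                return w * h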

--- " ...should be possible to declare 'metadata' in code modules, that is, instances of classes instead of just classes. There should be a way to discover/query this data. "

python is good at this..
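
e.g. a sketch of what declaring and querying module-level instances could look like in Python (Meta and the module layout are hypothetical):

    import inspect, types

    class Meta:
        def __init__(self, author, version):
            self.author, self.version = author, version

    # some_module.py would declare, at top level:  INFO = Meta("bayle", 3)

    def find_metadata(module: types.ModuleType) -> list[Meta]:
        return [obj for _, obj in inspect.getmembers(module)
                if isinstance(obj, Meta)]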

---

this comment reinforces my enthusiasm for compiler-supported source code difficulty levels a la hypercard: " Where does this fear of complexity come from? Scala never bit anyone's head off. If you find certain features too complex, don't use them, and mandate that in your department's coding standards. I also consider a Java developer who doesn't knows basic generics worse than mid-level. "

---

reified generics:

"Many people are unsatisfied with the restrictions caused by the way generics are implemented in Java. Specifically, they are unhappy that generic type parameters are not reified: they are not available at runtime. Generics are implemented using erasure, in which generic type parameters are simply removed at runtime. That doesn't render generics useless, because you get typechecking at compile-time based on the generic type parameters, and also because the compiler inserts casts in the code (so that you don't have to) based on the type parameters.

Generics are implemented using erasure as a response to the design requirement that they support migration compatibility: it should be possible to add generic type parameters to existing classes without breaking source or binary compatibility with existing clients" -- http://gafter.blogspot.com/2006/11/reified-generics-for-java.html

---

" Catching multiple exception types: A single catch clause can now catch more than one exception types, enabling a series of otherwise identical catch clauses to be written as a single catch clause. "

" Improved checking for rethrown exceptions: Previously, rethrowing an exception was treated as throwing the type of the catch parameter. Now, when a catch parameter is declared final, rethrowing the exception is known statically to throw only those checked exception types that were thrown in the try block, are a subtype of the catch parameter type, and not caught in preceding catch clauses. "

---

java Automatic Resource Management (ARM) (like Python's with)
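
e.g. (process is a placeholder):

    from contextlib import contextmanager

    with open("data.txt") as f:     # closed on exit, even if the body raises
        process(f)

    # and new resource types are cheap to define:
    @contextmanager
    def locked(lock):
        lock.acquire()
        try:
            yield
        finally:
            lock.release()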

---

" immutable data structures, type inferencing, lazy evaluation, monads, arrows, pattern matching, constraint-based programming"

---

http://www.ymeme.com/zmacs-vs-emacs-manual.html

--- " " The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important. " -- http://java.dzone.com/news/i-don%E2%80%99t-much-get-go

--- " Support for internationalization is very important.

    Go: “Check! Check out my samples, I can take Chinese characters in my strings as part of the source code! Oh yes!”
    (I think Go misses the point here. Rich internationalization support goes far beyond Unicode strings. *facepalm* But since Go has zero support for GUIs, little of internationalization really applies.) "

---

oop

---

" Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized such that late-executed code that should be called after immediate code doesn’t require putting it below immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it. "
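
for comparison, a rough Python analogy for defer (my sketch, not a Go feature): contextlib.ExitStack runs registered callbacks in LIFO order on block exit:

    from contextlib import ExitStack

    def handler():
        with ExitStack() as stack:
            f = open("log.txt", "w")
            stack.callback(f.close)    # like: defer f.close()
            f.write("body runs first; cleanups run last, in reverse order")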

---

the option to turn off boundschecking

--- for interop:

the ability to easily translate data into C-compatible data, e.g. fixed size array of bytes
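
e.g. in Python, struct packs values into a C-compatible byte layout and ctypes declares the matching struct (field names are mine):

    import struct, ctypes

    packed = struct.pack("<i8s", 42, b"hello")    # little-endian int32 + 8 bytes

    class CRecord(ctypes.Structure):              # mirrors a C struct
        _fields_ = [("id", ctypes.c_int32),
                    ("name", ctypes.c_char * 8)]  # fixed size array of bytes

    rec = CRecord(42, b"hello")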

---

for interop:

safe manual memory management (apparently Rust has this)

---

the option to compile w/o virtual dispatch slowing down the result

---

" Something has got to come along which simplifies development, especially concurrency, and allows programmers of varying skill levels to write productive code in. Then after 5 mil loc, programmers coming and going on a project, an app doesn't end up looking like a fragile mess. "

---

ability to write a C-callable library from it (e.g. without a 'runtime')

---

" To replace C, one must create a language which, first and foremost, can target shared libraries, because that's how most of systems software operates on virtually all current day's operating systems.

This is by far the #1 requirement: extensions to all VMs and scripting languages are shared libraries. Plugins for various servers are shared libraries, heck pretty much everything which is not a web app is a shared library, either on windows or linux.

In my life as a C++ developer (about 7 years) I have never, even once, worked on an executable. All my code, at all companies where I worked, always ran inside of either a DLL or a so-file loaded by some hosting process.

In fact, I believe that so-files should have been the default compilation target for both Go and D. Then millions of Ruby/Perl/Python/Java/<anything goes> programmers could have used those languages for performance-critical or OS-dependent code in their programs, after all that's what "systems languages" are for.


jerf 621 days ago


It has become increasingly clear to me that C's influence on the systems landscape is even stronger than it would initially appear. Our ability to move beyond what we have now is not merely hampered by the fact that we have to create an entire systems infrastructure, it is hampered by the fact that we always get sucked into having to support C shared libraries in addition to that, and in general that requires certain things to be true of your language, which pulls you into the C local optima. And now you've failed to write a systems language that has moved fundamentally beyond C.

In short, to be able to run a C library means that your language must hand over total control to that C library, which may then dance all over memory, if it chooses. You can't build a system that enforces any further constraints. And you don't have to be an "academic weirdo" language anymore to want to be able to enforce constraints; even something like Go has a significant runtime that you need to work with properly to get the benefits of Go, and a C library simply expects to own its thread for as long as it wants, do whatever it wants to allocate memory, use whatever resources it wants, not be scheduled in any way by anything other than the OS, and just this immense laundry list of requirements that all looked OK 30 years ago, but it's increasingly clear that to get to the next step of systems design, some of those are going to have to be modified. And C is not the language that will allow this.

The only language I know that manages to have radically different semantics from C at the deepest levels, yet can still talk to C libraries without (much) compromise of those semantics, is Haskell.

Somehow we've got to escape from C-type linking dictating requirements to the deepest levels of the semantics of the new language, or we're not going to escape from the local optima we are in right now. "


seems like Perl has one of those single predicate switch statements i noted earlier:

"

    use v5.14;
    given ($var) {
    $abc = 1 when /^abc/;
    $def = 1 when /^def/;
    $xyz = 1 when /^xyz/;
    default { $nothing = 1 }
    }"

"

sapphirecat 621 days ago


When I think "string handling" in a Perl-vs-Python sense, I think "regex syntax". There are 3 distinct tiers: Perl and Ruby's built-in syntax, PHP's preg_* functions, and finally the Python (and perhaps Java?) Extreme OO style. (In descending order by my preference.)

Python's approach also eliminates the pattern of `if (preg_match(...)) { use captured groups; }` since the match object comes back as a return value instead of either a reference or magic variables, and assignment of that value is illegal inside the if's condition. Very Pythonic, but adds an extra line of code to assign the match separately from testing the result.



tene 621 days ago


When working with regular expressions in Perl, the RegularExpressions::ProhibitCaptureWithoutTest perlcritic policy (default severity 3) helps avoid some common mistakes.

http://search.cpan.org/~elliotjs/Perl-Critic-1.115/lib/Perl/... "
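
(side note: since Python 3.8, the walrus operator legalizes exactly the assignment-inside-if that the comment above says Python forbids:)

    import re

    if m := re.match(r"(\d+)-(\d+)", "12-34"):
        print(m.group(1), m.group(2))   # captured groups used only on a match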


very interesting:

http://dl.rust-lang.org/doc/tutorial.html#the-rust-memory-model

shades of bartosz milewski's system..


" 1. The "cleanup"/"restore invariant" problem has several solutions. The best solution probably looks like a combination of RAII (for things which represent resources), try/finally (for restoring invariants in both error and non-error paths) and try/rollback (to use the previous commenter's name; I don't have a better one...) Oh and IDisposable is OK too when you have "using" to go along with it. I think that Herb Sutter would argue "isn't this just destructors with funny syntax?" and I'd tend to agree. Unfortunately no language has all of these. "

---

" So what does C++11 bring to the table? To mention some:

Safety - Standardized smart pointers, nullptr, better type-safety

Performance - Rvalue references, move semantics, constant expressions with constexpr

Concurrency - Standard facilities for threading, async, futures

Language features - lambda expressions, support for UTF8/16/32, uniform initialization

Libraries - std::chrono for various time handling facilities, standard random number engines and generators, hash tables, regular expressions, tuples

There's also plans to introduce more libraries to C++ next year, and evolve the language through libraries rather than the core language specification. "

---

" That was my first surprise. My second came a couple of hours into the project, when I noticed (allowing for pauses needed to look up new features in Programming Python) I was generating working code nearly as fast as I could type. When I realized this, I was quite startled. An important measure of effort in coding is the frequency with which you write something that doesn't actually match your mental representation of the problem, and have to backtrack on realizing that what you just typed won't actually tell the language to do what you're thinking. An important measure of good language design is how rapidly the percentage of missteps of this kind falls as you gain experience with the language.

When you're writing working code nearly as fast as you can type and your misstep rate is near zero, it generally means you've achieved mastery of the language. But that didn't make sense, because it was still day one and I was regularly pausing to look up new language and library features!

This was my first clue that, in Python, I was actually dealing with an exceptionally good design. Most languages have so much friction and awkwardness built into their design that you learn most of their feature set long before your misstep rate drops anywhere near zero. Python was the first general-purpose language I'd ever used that reversed this process.

Not that it took me very long to learn the feature set. I wrote a working, usable fetchmailconf, with GUI, in six working days, of which perhaps the equivalent of two days were spent learning Python itself. This reflects another useful property of the language: it is compact--you can hold its entire feature set (and at least a concept index of its libraries) in your head. C is a famously compact language. Perl is notoriously not; one of the things the notion “There's more than one way to do it!” costs Perl is the possibility of compactness. "

---

"Each frame is conceptually annotated with a set of continuation marks. A mark consists of a key and its value; the key is an arbitrary value, and each frame includes at most one mark for any key. Various operations set and extract marks from continuations, so that marks can be used to attach information to a dynamic extent. For example, marks can be used to record information for a “stack trace” to be used when an exception is raised, or to implement dynamic scope. 1.1.12 Prompts, Delimited Continuations, and Barriers

            +See Continuations for continuation and prompt functions.

A prompt is a special kind of continuation frame that is annotated with a specific prompt tag (essentially a continuation mark). Various operations allow the capture of frames in the continuation from the redex position out to the nearest enclosing prompt with a particular prompt tag; such a continuation is sometimes called a delimited continuation. Other operations allow the current continuation to be extended with a captured continuation (specifically, a composable continuation). Yet other operations abort the computation to the nearest enclosing prompt with a particular tag, or replace the continuation to the nearest enclosing prompt with another one. When a delimited continuation is captured, the marks associated with the relevant frames are also captured.

A continuation barrier is another kind of continuation frame that prohibits certain replacements of the current continuation with another. Specifically, a continuation can be replaced by another only when the replacement does not introduce any continuation barriers (but it may remove them). A continuation barrier thus prevents “downward jumps” into a continuation that is protected by a barrier. Certain operations install barriers automatically; in particular, when an exception handler is called, a continuation barrier prohibits the continuation of the handler from capturing the continuation past the exception point.

An escape continuation is essentially a derived concept. It combines a prompt for escape purposes with a continuation for mark-gathering purposes. As the name implies, escape continuations are used only to abort to the point of capture. "

-- http://docs.racket-lang.org/reference/eval-model.html#%28tech._continuation._barrier%29


toread http://www.apl.jhu.edu/~hall/Lisp-Notes/Macros.html


advanced macro powers:

http://okmij.org/ftp/Scheme/macros.html

---

http://matt.might.net/articles/metacircular-evaluation-and-first-class-run-time-macros/

runtime macros

---

for 'runtimeless' execution should disallow runtime macros by default

---

" 3D-syntax and hygiene

Some constructs, such as let and letrec, are desugared into other constructs, such as lambda and set!; for example:

 (let ((var exp) ...) body)

becomes:

 ((lambda (var ...) body) exp ...)

This sort of expansion can cause a problem if we use a let construct in a context where lambda has been redefined. For example, we might define a function to compute the energy of a photon:

(define (energy lambda)
  (let ((c speed-of-light)
        (h plancks-constant))
    (/ (* c h) lambda)))

When the let form expands into a lambda-application in this code, the symbol lambda is no longer bound to the syntactic primitive for lambda; rather, it is bound to some numeral representing wavelength. When the evaluator tries to evaluate this code, it will throw a particularly cryptic error about trying to apply a non-function value. This kind of capture is one of the two kinds of "hygiene" violations that Lisp systems worry about, and it is the only one that cannot be solved with gensym.

The provided implementation solves this hygiene problem through 3D syntax. An expression is 3D if a programmer cannot write it down. In other words, it is an expression that must have come from a special syntactic expansion. In Lisp, raw procedures are 3D, because there is no way to write down a literal that the read procedure will pull in as a procedure.

If you examine the eval procedure for the first-class macros implementation, you will find a case not present in the ordinary evaluator: procedure?. When the evaluator hits a procedure, it assumes it takes no arguments and then evaluates it, directly returning whatever that procedure returns.

This behavior provides a way to pass protected values out of first-class macros, since they will be evaluated in whatever scope there was when the closure was born. Consequently, a let form in my implementation (effectively) expands into:

((,(lambda () 3d-lambda) ,@var ,@body) ,@exp)

where 3d-lambda is bound to the syntactic primitive for lambda. "

-- http://matt.might.net/articles/metacircular-evaluation-and-first-class-run-time-macros/

---

" Ask yourself how many times you have written code like this:

int? friendsSonsAge = null;
if (person != null && person.Friend != null && person.Friend.Son != null) {
    friendsSonsAge = person.Friend.Son.Age;
}

OK, OK. I was never really good at short contrived code examples but the point is that often you want to know the value of a property buried deep in a hierarchy of objects, each of which may or may not be null. In most mainstream languages (Java, C, Javascript, VB.NET, C#, Ruby) you have no recourse but the ugly boilerplate code shown above.

In C-Omega you can do this:

int? friendsSonsAge = person.Friend.Son.Age;

if (friendsSonsAge != null) do something

If any of the intermediate objects in the expression on the right of the assignment are null the whole expression evaluates to null. This is called null propagation and it works very nicely together with nullable types. I have no idea why this feature wasn't added to C# 3.0. No matter, we can create a macro that duplicates this behavior ourselves. :-) "
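
a hedged Python sketch of null propagation as a helper (names are mine):

    def maybe(obj, *attrs):
        for name in attrs:
            if obj is None:
                return None        # any None along the path propagates out
            obj = getattr(obj, name)
        return obj

    friends_sons_age = maybe(person, "friend", "son", "age")
    if friends_sons_age is not None:
        do_something(friends_sons_age)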

---

http://themechanicalbride.blogspot.com/2007/03/runtime-macros-in-c-30.html


http://c2.com/cgi/wiki?RuntimeMacro


http://web.archive.org/web/20070214111958/http://www.linearity.org/bawden/mtt/ First-class Macros Have Types

" In modern Scheme, a macro captures the lexical environment where it is defined. This creates an opportunity for extending Scheme so that macros are first-class values. The key to achieving this goal, while preserving the ability to compile programs into reasonable code, is the addition of a type system. Many interesting things can be done with first-class macros, including the construction of a useful module system in which modules are also first-class.

Here is the paper as it appeared in POPL 2000: In PostScript (59K).

Here is the current code distribution. Anyone trying to understand this code may find it useful to start by reading this earlier system of mine that uses many of the same techniques for implementing macros, and that is much more extensively commented. "

---

note: real continuations are re-invokable and savable

---

generators
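
e.g. in Python:

    def countdown(n):
        while n > 0:
            yield n        # suspend here, hand n out, resume on next()
            n -= 1

    list(countdown(3))     # [3, 2, 1], produced lazily on demand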

---

coroutines
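
e.g. a generator-based coroutine in Python, resumed with send():

    def averager():
        total, count = 0, 0
        while True:
            x = yield (total / count if count else None)
            total, count = total + x, count + 1

    avg = averager()
    next(avg)        # prime it to the first yield
    avg.send(10)     # -> 10.0
    avg.send(20)     # -> 15.0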

---

toread http://docs.racket-lang.org/reference/cont.html toread mb http://okmij.org/ftp/continuations/generators.html

toread mb http://okmij.org/ftp/

toread mb http://okmij.org/ftp/continuations/

---

A modern language. Go has everything you’d expect a modern language to have: Unicode, garbage collection, multi-core (GOMAXPROCS), built-in map, string and array types, closures, unit-testing , an official style guide (‘gofmt’), reflection, etc

---

i dont understand this:

" Basically, fold captures the principle of one-step structural induction. However, we often want to recursions that have more complex structure, and emulating them with folds gets yucky. For example, here are two examples which are doable but ugly with folds:

(* We want both 0 and 1-element lists as base cases here *)

fun concat [] = ""

concat [x] = x
concat (x :: xs) = x ^ ", " ^ (concat xs)

(* with foldr, we need a flag to identify the last item *)

fun concat ss =
  fst (foldr (fn (s, (acc, islast)) =>
                if islast then (s, false) else (s ^ ", " ^ acc, false))
             ("", true) ss)

(* We are using a course-of-values recursion here *)

fun fib 0 = 1
  | fib 1 = 1
  | fib (n + 2) = (fib n) + (fib (n + 1))

(* with folds, you need a higher-order fold to emulate course-of-values *)

fun foldnat s z n = if n = 0 then z else s (foldnat s z (n-1))

fun fib n =
  let val (_, fib', _) =
        foldnat (fn (k, fibp, fibpp) =>
                   (k + 1,
                    (fn n => if n < k then fibp n else fibp n + fibpp n),
                    fibp))
                (0, (fn n => if n > 0 then 1 else 0), (fn n => 0))
                n
  in fib' n end

Things get more annoying when you want to iterate over two collections at once naturally.

This is because the natural way to put two collections together is with a zip, which is an unfold, and since folding over an unfold may fail to terminate this won't typecheck. Unfolds generate elements of greatest fixed points, and folds consume elements of least fixed points. So the typechecker has to rule out (fold f) o (unfold g) because it doesn't know for sure that a call to unfold g will produce something typed by the least fixed point. (Haskell and ML don't care about this issue because they both admit nontermination.)

-*-*-*-

This is all more or less fixable with enough dependent type theory, because you can give a general induction principle more or less equivalent to the Knaster-Tarski theorem, and use it to derive all the specific induction principles you want. There will likely need to be some research done on the scaffolding you need to make this all work in a clean way, though -- today, most type theories are at the spartan end, because proof assistants want simple type theories to get small trusted bases, whereas programming languages call for rich type theories to get small programs.

More transformations in terms of folds

neelk wrote

    Basically, fold captures the principle of one-step structural induction.
    However, we often want to recursions that have more complex structure, and
    emulating them with folds gets yucky. For example, here are two examples which
    are doable but ugly with folds: 

Well, these particular examples are actually quite easily doable with folds.

(* We want both 0 and 1-element lists as base cases here *)
fun concat [] = ""
  | concat [x] = x
  | concat (x :: xs) = x ^ ", " ^ (concat xs)

With fold (and using OCaml), we write just as easily:

let concat = function
  | [] -> ""
  | [x] -> x
  | (x::xs) -> List.fold_left (fun seed a -> seed ^ ", " ^ a) x xs

# concat [];;
# concat ["a"];;
# concat ["a";"b";"c"];;

The second example:

(* We are using a course-of-values recursion here *)
fun fib 0 = 1
  | fib 1 = 1
  | fib (n + 2) = (fib n) + (fib (n + 1))

(* with folds, you need a higher-order fold to emulate course-of-values *)

actually, no. A simple fold suffices:

let rec foldnat s z n = if n = 0 then z else s (foldnat s z (n-1));;

let fib n = fst (foldnat (fun (a,b) -> (a+b,a)) (1,0) n);;

# fib 0;;
# fib 1;;
# fib 2;;
# fib 8;;
    Things get more annoying when you want to iterate over two collections at once
    naturally. 

Not that annoying:

How to zip folds: A library of fold transformers (streams)

Once we know how to get takeWhile and dropWhile via folds, the rest is quite trivial. Essentially, we need a way to `split a stream', which is doable for any stream represented as fold -- and even represented as a monadic fold (where the production of each element may incur a side effect, and we have to be careful not to repeat the side-effects).

"

---

toread on non-Turing complete (Turing incomplete) languages:

Total Functional Programming http://lambda-the-ultimate.org/node/2003

lampwww.epfl.ch/~mcdirmid/papers/mcdirmid06turing.pdf "Turing Completeness Considered Harmful"

http://en.wikipedia.org/wiki/Charity_%28programming_language%29

https://www.google.com/search?client=ubuntu&channel=fs&q=%22Turing+Completeness+Considered+Harmful%22&ie=utf-8&oe=utf-8#hl=en&client=ubuntu&hs=Jcx&tbo=d&channel=fs&sclient=psy-ab&q=%22turing+incomplete%22&oq=%22turing+incomplete%22&gs_l=serp.3..0i10i20j0i10l3.117997.120888.0.121000.19.16.0.0.0.0.360.3662.0j4j8j3.15.0.les%3B..0.0...1c.1.JJxO-x7wORY&pbx=1&bav=on.2,or.r_gc.r_pw.r_cp.r_qf.&bvm=bv.41248874,d.dmQ&fp=8e0d657e9b9006fd&biw=1340&bih=1984

" http://www.swan.ac.uk/compsci/research-2011/seminars/seminar.php?seminar=352

While-programs are Turing-incomplete for non-strict oracles

Abstract

We show that while-loops (or primitive recursion + unbounded search) form a less powerful programming concept than recursion when oracles accepting partially defined inputs are present. This implies that in a non-strict functional programming language while-programs cannot replace recursive programs with regard to the computation of functionals of type two. "

http://www.researchgate.net/publication/221055384_DSLTrans_A_Turing_Incomplete_Transformation_Language

coq is turing-incomplete coq.inria.fr

http://en.wikipedia.org/wiki/Turing_completeness#Non-Turing-complete_languages

" In English (rather than CompSciSpeak?) it is common to say that a language "lacks certain features", and arguably that it is therefore "not complete" by comparison with another language which has them. One might counter-argue that it is possible to implement closures in C. One can for example write a C program which is a Lisp interpreter, and embed in it a Lisp program as a string. Voila, closures in C. However, this is not what most people are asking for if they say, "I wish C had closures". If you think every language needs closures, then C is incomplete. If you think every language needs structured programming, then ARM assembler is incomplete. If you think it should be possible to dynamically add methods to an object, then C++ is incomplete, even though it's perfectly possible to write a C++ class with "AddMethod?" and "CallMethodByName?" methods and fake up your own method calling convention from there. And so on. "

" The answer is a most definite yes. Turing completeness only implies that a Turing complete language can be used to compute any computable function. For one, it doesn't say anything about the complexity of a computation.

You can usually expect that any polynomial time algorithm can be expressed as a polynomial time algorithm in any other Turing complete language, but that's about it. Especially any real time requirements (soft or hard) go out of the window, if your only focus is Turing completeness.

...

As for operating systems, an interface to the hardware is a must, but any language can be fitted with such utilities. "

epigram has a Turing-incomplete subset

PR-Hume:

" 1.2.1 Hume Levels Rather than attempting to apply cost modelling and correctness proving technology to an existing language framework either directly or by altering the language to a greater or lesser extent (as with e.g. RTSj [10]), our approach is to design Hume in such a way that we are certain that formal models and proofs can be constructed. We identify a series of overlapping Hume language levels shown in Figure 1.1, where each level adds expressibility to the expression semantics, but either loses some desirable behavioural property or increases the technical difficulty of providing formal correctness/cost models. These levels are: HW-Hume: a hardware description language — capable of describing both synchronous and asynchronous hardware circuits, with pattern matching on tuples of bits, but with no other data types or operations [27]; FSM-Hume: a hardware/software language — HW-Hume plus first-order functions, conditionals expressions and local definitions [26]; Template-Hume: a language for template-based programmimng — FSM-Hume plus predefined higher-order functions, polymorphism and inductive data structures, but no user-defined higher-order functions or recursive function definitions; PR-Hume: a language with decidable termination — Template-Hume plus user-defined primitive recursive and higher-order functions, and inductive data structure definitions; Full-Hume: a Turing-complete language — PR-Hume plus unrestricted recursion in both func- tions and data structures. "

http://en.wikipedia.org/wiki/Hume_%28language%29

BlooP (short for Bounded loop)

i think agda might be a total language too, not sure

"

We can further define a programming language in which we can ensure that even more sophisticated functions always halt. For example, the Ackermann function, which is not primitive recursive, nevertheless is a total computable function computable by a term rewriting system with a reduction ordering on its arguments (Ohlebusch, 2002, pp.67). "

http://en.wikipedia.org/wiki/Structured_program_theorem

http://en.wikipedia.org/wiki/Primitive_recursive_function

http://news.ycombinator.com/item?id=2413816

http://sm.reddit.com/r/learnprogramming/comments/134azp/haskell_scala_python_racket/

http://forums.xkcd.com/viewtopic.php?f=11&t=73104

http://www.emilmont.net/doku.php?id=software_engineering:languages

http://matt.might.net/articles/best-programming-languages/


http://lambda-the-ultimate.org/node/2550


http://www.scala-lang.org/node/122


allowing xml literals in code: http://www.scala-lang.org/node/131

---

how to bind an unbound fn in Python:

" All functions are also descriptors, so you can bind them by calling their __get__ method:

bound_handler = handler.__get__(self, MyWidget)

Here's R. Hettinger's excellent guide to descriptors. "

e.g.

class A: pass

a = A()

def b(self): self.c = 1

a.b = b.__get__(a)   # bind b to the instance a
a.b()
a.c                  # => 1

to unbind,

a.b.__func__

---

http://val.markovic.io/blog/youcompleteme-a-fast-as-you-type-fuzzy-search-code-completion-engine-for-vim

--- this Python idiom is easy to read:

    result = {}
    for (customer, ordersFromThisCustomer) in ordersByCustomer.items():
        result[customer] = statsFromCustomersOrders(ordersFromThisCustomer)

lexical scoping

shallow binding dynamic scoping

deep binding dynamic scoping


python-style immutable closures
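
a quick Python sketch of the lexical case, and of why Python closures read as 'immutable' without nonlocal (dynamic scoping would instead look the name up in the caller's environment):

    def counter():
        n = 0
        def bump():
            nonlocal n   # without this, n += 1 raises UnboundLocalError
            n += 1
            return n
        return bump

    c = counter()
    c(); c()   # 1, then 2: bump sees its defining scope, not its caller's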

---

" But, you can actually also use the keyword static between the brackets 1:

void bar(int myArray[static 10]);

This tells the compiler that it should assume that the array passed to foobar has at least 10 elements. (Note that this rules out a NULL pointer!)

Doing this serves two purposes:

    The compiler could use this information when optimizing the code 2
    The compiler can warn callers when it sees them calling the function with anything but an array of 10 or more ints."

--- from my Release It! notes:

131 asynchronous connections between systems are more stable (but substantially more complicated) than synchronous ones; e.g. pub/sub message passing is more stable than synchronous request/response protocols like HTTP. a continuum:

    in-process method calls (a function call into a library; same time, same host, same process)
    IPC (e.g. shared memory, pipes, semaphores, events; same host, same time, different process)
    remote procedure calls (XML-RPC, HTTP; same time, different host, different process) -- my note -- this continuum neglects the RPC/REST distinction (one part of which is statelessness of HTTP)
    messaging middleware (MQ, pub-sub, smtp, sms; different time, different host, different process)
    tuple spaces

--- finally a clear explanation of tuple spaces:

http://software-carpentry.org/blog/2011/03/tuple-spaces-or-good-ideas-dont-always-win.html

" Tuple Spaces (or, Good Ideas Don't Always Win)

I've resisted adding a module on high-performance computing to this course for a lot of reasons: I think other things are more important, there's enough coverage elsewhere, the software is hard for novices to set up... But there's another reason, one that may not be as good, but still has a seat at the table. Deep down, the reason I'm reluctant to teach MPI (the de facto standard for parallel programming) is that there's a much better model out there, one that works on all kinds of hardware, is comprehensible to novices, and delivers good performance on a wide range of problems. Its name is tuple space, its most famous implementation is Linda, and unfortunately, for a lot of reasons that I still don't understand, it somehow became an "also ran" in parallel programming.

How easy is Linda? The examples in this article, and this well-written little book, are pretty compelling, but since the first is behind a paywall, and the second is out of print, here's a short overview. A tuple space is, as its name suggests, a place where processes can put, read, and take tuples, which are in turn just sequences of values. ("job", 12, 1.23) is a tuple made up of a string, an integer, and a floating-point number; a tuple space can contain zero or more copies of that tuple, or of tuples containing other types of values, simple or complex.

A process puts something in tuple space with put(value, value, ...). It can take something out with take(...), or copy something (leaving the original in tuple space) with copy(...). The arguments to take(...) and copy(...) are either actual values, or variables with specific types; values match themselves, while types match things of that type. For example:

    put("job", 12, 1.23) puts the tuple ("job", 12, 1.23) in the tuple space
    if f is a floating point variable, take("job", 12, ?f) takes that tuple out of tuple space, assigning 1.23 to f
    but take("job", 15, ?f) blocks, because there is no tuple in tuple space matching the pattern (12 doesn't match 15)
    and if i is an integer variable, copy("job", ?i, ?f) assigns 12 to i and 1.23 to f, but leaves the tuple in tuple space.

There are non-blocking versions of take(...) and copy(...) called try_take and try_copy (the names vary from implementation to implementation) that either match right away and return true, assigning values to variables in their patterns, or fail to match, don't do any assignment, and return false. There is also eval(...), which takes a function and some arguments as parameters and creates a new process. Whatever (tuple of) values that function returns when it finishes executing is then put in tuple space—this is how one initial process can spawn many others.

http://software-carpentry.org/files/2011/03/tuplespace.png

And that's it. That's the whole thing. It's easy, easy, easy for beginners to understand—much easier than MPI. And compile-time analysis of tuple in/out patterns can make it run efficiently in most cases; adhering to some simple patterns can help too. But for a whole bunch of reasons, it never really took off: not as a language extension to C, not as JavaSpaces?, not in various homebrew implementations for agile languages like Python, and that makes me sad. It's as if the metric system had failed, and we had to do physics with foot-acres and what-not. But I guess that's the world we live in... "

here's something i (bayle) wrote about them for the c2.com wiki:

The structure has six primitive operations: put, copy, take, try_copy, try_take, and eval. put places a tuple into the bag. copy finds and reads a tuple. take is like copy but also removes the tuple after reading it. copy and take block if they cannot find a tuple matching the query; try_copy and try_take are the non-blocking versions. eval forks a new process.

---

a good defn of tuple from c2 wiki:

"A tuple is a fixed fixed-length list containing elements that need not have the same type. It can be, and often is, used as a key-value pair. "


lists of concurrency constructs:

http://assets.en.oreilly.com/1/event/27/A%20Survey%20of%20Concurrency%20Constructs%20Presentation.pdf

http://weblog.plexobject.com/?p=1634


"

    Fast asynchronous message passing.

This is what ZeroMQ gives you. But it gives you it in a form different to that of Erlang: in Erlang, processes are values and message-passing channels are anonymous; in ZeroMQ, channels are values and processes are anonymous. ZeroMQ is more like Go than Erlang. If you want the actor model (that Erlang is based on), you have to encode it in your language of choice, yourself. "

http://www.rabbitmq.com/blog/2011/06/30/zeromq-erlang/

" As far as selective message reception goes, the dual of (Erlang) "from a pool of messages, specify the types to receive" is (Go/ZMQ) "from a pool of channels, specify the ones to select". One message type per channel. "


Ad-hoc Shared State for Web Applications Jack Jansen

looks interesting but i cant get all the ideas from the slides


Ruby refinements:

http://www.rubyinside.com/ruby-refinements-an-overview-of-a-new-proposed-ruby-feature-3978.html

" In a nutshell, refinements clear up the messiness of Ruby's monkey patching abilities by letting you override methods within a specific context only. Magnus Holm has done a great write up of the concepts involved, and presents a good example:

module TimeExtensions
  refine Fixnum do
    def minutes; self * 60; end
  end
end

class MyApp
  using TimeExtensions

  def initialize
    p 2.minutes
  end
end

MyApp.new     # => 120
p 2.minutes   # => NoMethodError "

hmm... seems to me that this could be accomplished with prototype inheritance and namespaces, e.g. the namespace allows you to override the 'usual' prototype with your own, but only within a module.

but is a value's refinement-ness determined once-and-for-all at initialization, or when it is used? if the latter, then it's a little different from prototype inheritance.

http://yehudakatz.com/2010/11/30/ruby-2-0-refinements-in-practice/

ok, it's the latter. the refinement is not a property of the value, rather, it is lexically scoped, except that "refinements are inherited from the calling scope when using instance_eval". not sure what that means but see http://timelessrepo.com/refinements-in-ruby . oh, it's just Ruby's eval: http://4loc.wordpress.com/2009/05/29/eval-module_eval-and-instance_eval/

---

the following series on monads was recommended to me by a friend:

---

" functions are first-class values: many other features that define functional programming – immutable data, preference for recursion over looping, algebraic type systems, avoidance of side effects – are entirely absent. And while first-class functions are certainly useful, and enable users to program in functional style should they decide to, the notion that JS is functional often overlooks a core aspect of functional programming: programming with values.

‘Functional programming’ is something of a misnomer, in that it leads a lot of people to think of it as meaning ‘programming with functions’, as opposed to programming with objects. But if object-oriented programming treats everything as an object, functional programming treats everything as a value – not just functions, but everything. This of course includes obvious things like numbers, strings, lists, and other data, but also other things we OOP fans don’t typically think of as values: IO operations and other side effects, GUI event streams, null checks, even the notion of sequencing function calls. If you’ve ever heard the phrase ‘programmable semicolons’ you’ll know what I’m getting at. "

---

http://dfellis.github.com/queue-flow/2012/09/22/why-queue-flow/

---

" The Uniform Access Principle (UAP) was articulated by Bertrand Meyer in defining the Eiffel programming language. This, from the Wikipedia Article pretty much sums it up: “All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation.”

Or, alternatively, from this article by Meyer: “It doesn’t matter whether that query is an attribute (representing an object field) or a function (representing a computation); this is an internal representation decision, irrelevant to clients accessing objects through calls such as [ATTRIB_ACCESS]. This “Principle of Uniform Access” — it doesn’t matter to clients whether a query is implemented as an attribute or a function” "
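
Python's property is UAP in miniature; the call site never betrays storage vs. computation:

    class Point:
        def __init__(self, x):
            self._x = x

        @property
        def x(self):       # today a computation; yesterday a plain attribute
            return self._x

    Point(3).x   # 3, same notation either way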

---

" Lack of Struct Immutability

By “immutability” I mean that there’s no way for a framework to prevent changes to structures that are supposed to be managed by the framework. This is related to Go’s non-conformity to the UAP (above). I don’t believe there’s a way to detect changes either.

This can probably be done by implementing immutable/persistent data structures along the line of what Clojure did, but these will not have literal forms like Arrays and Maps do now in Go. Once again, this is, in part, an issue of non-idiomatic use of Go. "


named arguments, default arguments
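
e.g. in Python (connect and its parameters are hypothetical):

    def connect(host, port=5432, *, timeout=30, retries=3):
        ...

    connect("db.local")                   # defaults fill in the rest
    connect("db.local", timeout=5)        # named args document the call site
    connect(port=5433, host="db.local")   # order no longer matters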

---

"

Unyielding Enforcement of ‘Unused’ Errors

The implementors of Go have made a couple of rules that I’m very happy that they’re enforcing. Two in particular: “there are no compiler warnings, only errors”, and “unused things (variables, imports, etc.) are errors”.

The annoying part is that a Go programmer can’t get around this even temporarily. So if there’s something they are trying to experiment with (i.e. figure out how it works) it isn’t always possible to just comment out surrounding code. Sometimes the commenting makes variables or import statements unused… and it won’t compile. So the programmer has to comment out still more code. This can happen a few times before it successfully compiles. Then the programmer has to uncomment the stuff just commented out. Error prone, messy, etc.

There are hacky ways to get around this, but these allow unused things to end up in the final code if programmers are careless/forgetful/busy/rushed.

This is really just an annoyance but it stands out, to me at least. Here’s a compiler that comes with tools for sophisticated functions like formatting, benchmarking, memory and CPU profiling, and even “go fix” yet they leave us temporarily commenting out code just to get it to compile? Sigh. "

i.e. temporarily commenting out one piece of code shouldn't make the surrounding code fail to compile

---

Weak References

---

perl's <> operator
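(Perl's <> reads lines from the files named on the command line, or stdin if none. Python's fileinput module covers roughly the same ground; a quick sketch:)

  # cat.py -- roughly Perl's `while (<>) { print }`
  import fileinput

  for line in fileinput.input():
      print(line, end="")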

---

concise expression of operations on nested structures, and nested structure literals

" "Out of nowhere"? One of the perennial topics of the mid to late 90s was the idea that Perl needed major fixes to remove common pitfalls (e.g. the object model, syntax warts (nested structures are just horrid), etc) and be more suitable for large projects. "

---

arrays

lists

maps

strings

---

object system

---

concise AOP summary: http://stackoverflow.com/questions/4829088/java-aspect-oriented-programming-with-annotations

---

http://www.cs.rice.edu/~taha/MetaOCaml/

" MetaOCaml? is a multi-stage extension of the OCaml programming language, and provides three basic constructs called Brackets, Escape, and Run for building, combining, and executing future-stage computations, respectively. (Please read README-META file in distribution for MetaOCaml?'s syntax for these constructs). "

" 1.2 The Three Basic MSP Constructs We can illustrate how MSP addresses the above problems using MetaOCaml? [2], an MSP extension of OCaml [9]. In addition to providing traditional im- perative, object-oriented, and functional constructs, MetaOCaml? provides three constructs for staging. The constructs are called Brackets, Escape, and Run. Using these constructs, the programmer can change the order of evaluation of terms. This capability can be used to reduce the overall cost of a computation. Brackets (written .< ... >. ) can be inserted around any expression to delay its execution. MetaOCaml? implements delayed expressions by dynamically gener- ating source code at runtime. While using the source code representation is not the only way of implementing MSP languages, it is the simplest. The following short interactive MetaOCaml? session illustrates the behavior of Brackets 1 :

  1. let a = 1+2;; val a : int = 3
  2. let a = .<1+2>.;; val a : int code = .<1+2>. Lines that start with
  3. are what is entered by the user, and the following line(s) are what is printed back by the system. Without the Brackets around 1+2 , the addition is performed right away. With the Brackets, the result is a piece of code representing the program 1+2 . This code fragment can either be used as part of another, bigger program, or it can be compiled and executed. 1 Some versions of MetaOCaml? developed after December 2003 support environment classifiers [21]. For these systems, the type int code is printed as (’a,int) code . To follow the examples in this tutorial, the extra parameter ’a can be ignored. 2 In addition to delaying the computation, Brackets are also reflected in the type. The type in the last declaration is int code . The type of a code fragment reflects the type of the value that such code should produce when it is executed. Statically determining the type of the generated code allows us to avoid writing generators that produce code that cannot be typed. The code type construc- tor distinguishes delayed values from other values and prevents the user from accidentally attempting unsafe operations (such as 1 + .<5>. ). Escape (written .~ ...) allows the combination of smaller delayed values to con- struct larger ones. This combination is achieved by “splicing-in” the argument of the Escape in the context of the surrounding Brackets:
  4. let b = .<.~a * .~a >. ;; val b : int code = .<(1 + 2) * (1 + 2)>. This declaration binds b to a new delayed computation (1+2)*(1+2) . Run (written .! ...) allows us to compile and execute the dynamically generated code without going outside the language:
  5. let c = .! b;; val c : int = 9 Having these three constructs as part of the programming language makes it possible to use runtime code generation and compilation as part of any library subroutine. In addition to not having to worry about generating temporary files, static type systems for MSP languages can assure us that no runtime errors will occur in these subroutines (c.f. [17]). Not only can these type systems exclude generation-time errors, but they can also ensure that generated programs are both syntactically well-formed and well-typed. Thus, the ability to statically type-check the safety of a computation is not lost by staging. "

---

clojure's & destructuring:

  user> (defn blah [& [one two & more]] (str one two "and the rest: " more))
  #'user/blah
  user> (blah 1 2 "ressssssst")
  "12and the rest: (\"ressssssst\")"

and map destructuring:

  user> (defn blah [& {:keys [key1 key2 key3]}] (str key1 key2 key3))
  #'user/blah
  user> (blah :key1 "Hai" :key2 " there" :key3 10)
  "Hai there10"
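a rough Python analog of the two examples above (positional rest args and keyword-only args; Python can't destructure nested argument lists the way Clojure does):

  def blah(one, two, *more):
      return f"{one}{two}and the rest: {more}"

  print(blah(1, 2, "ressssssst"))   # 12and the rest: ('ressssssst',)

  def blah2(*, key1=None, key2=None, key3=None):
      return f"{key1}{key2}{key3}"

  print(blah2(key1="Hai", key2=" there", key3=10))   # Hai there10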

---

implicit 'main' like Python
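i.e. the module body runs top to bottom with no declared main(), and the idiomatic guard distinguishes script use from import:

  def greet():
      print("hello")

  if __name__ == "__main__":   # true only when run as a script, not on import
      greet()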

---

interceptors (like monads; 'programmable semicolons')

---

implicit EOL

--

custom implicit typecasting (e.g. scala)

--

Icon's success/failure control flow ('goal-directed execution')
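a loose Python sketch of the flavor (Icon's generators reportedly inspired Python's): a computation "succeeds" by yielding results and "fails" by being exhausted, and the surrounding goal keeps resuming it. find() here is a hand-rolled illustration, not the str method:

  def find(s, sub):
      # succeed by yielding each match position; fail by running out
      i = s.find(sub)
      while i != -1:
          yield i
          i = s.find(sub, i + 1)

  # goal-directed: resume until a result satisfies the surrounding goal
  print(next(i for i in find("banana", "an") if i > 1))   # 3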

---

fork/exec (apparently "The JVM cannot fork/exec, so the many-small-programs design is a non-starter. Instead, the Java tools usually favor some sort of plug-in architecture, which is a great idea in theory but hard to get right in practice." -- http://stuartsierra.com/2011/08/30/design-philosophies-of-developer-tools)
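for reference, the Unix idiom the quote is talking about (Python sketch; POSIX-only):

  import os

  pid = os.fork()                # clone the current process
  if pid == 0:                   # child: replace our image with another program
      os.execvp("echo", ["echo", "hi from the child"])
  else:                          # parent: wait for the child to finish
      os.waitpid(pid, 0)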

---

first-class signatures

first-class interfaces

---

javascript 'with' to dynamically extend the scope chain:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/with

note: this seems to be disliked, because when you insert an object into the scope chain, which scope a given symbol resolves to is decided dynamically -- e.g. you put object Bob in the chain and refer to 'name' as shorthand for 'Bob.name', but the field is actually 'Bob.fullname', so if a 'name' happens to be defined somewhere else in the chain you silently get the wrong thing.

Doug dislikes this but would be okay with it if it was a typesafe compiled language in which the scope that binds any given reference under a 'with' was statically known.

---

dicts that can have any hashable object as keys, not just strings, and a std lib that uses them

---

dicts that can have mutable objects as keys, not just constants, and a std lib that uses them
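(covering this note and the previous one) Python's dicts already allow any hashable object as a key, and user-defined mutable objects work too because they hash by identity; a quick sketch:

  point = (3, 4)                     # any hashable value works as a key
  distances = {point: 5.0}

  class Node:                        # user-defined objects are hashable by
      pass                           # identity, even though they're mutable

  n = Node()
  labels = {n: "start"}
  print(distances[(3, 4)], labels[n])   # 5.0 start

  # built-in mutable containers are deliberately unhashable:
  # {[1, 2]: "x"}  ->  TypeError: unhashable type: 'list'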

---

a std library that uses iterators, not just lists, for everything
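e.g. Python 3 moved its std lib this way: map/filter/zip return lazy iterators rather than lists, and itertools composes them:

  import itertools

  evens = (n for n in itertools.count() if n % 2 == 0)   # infinite, lazy
  print(list(itertools.islice(evens, 5)))                # [0, 2, 4, 6, 8]

  print(next(map(str.upper, ["a", "b"])))                # 'A' (rest never computed)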

---

generics

---

generics with constraints
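(covering both notes above) a sketch with Python's typing module; the Comparable protocol here is a hand-rolled stand-in for a constraint:

  from typing import Protocol, TypeVar

  T = TypeVar("T")                    # unconstrained generic

  def first(xs: list[T]) -> T:
      return xs[0]

  class Comparable(Protocol):         # structural constraint: must support <
      def __lt__(self, other) -> bool: ...

  C = TypeVar("C", bound=Comparable)  # constrained generic

  def smallest(xs: list[C]) -> C:
      return min(xs)                  # OK only because the bound guarantees ordering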

--

logic programming

--

function scoping, as in JS's 'var' hoisting

note: this seems to be disliked. if a language has blocks, ppl seem to prefer block scoping.

--

event listener DE-registration, because registered event listeners that are methods of an object block that object from being garbage collected
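one fix is to register listeners weakly, so registration alone doesn't keep the listener's object alive; a Python sketch with weakref.WeakMethod (the Button class is made up):

  import weakref

  class Button:
      def __init__(self):
          self._listeners = []

      def on_click(self, handler):
          # hold the bound method weakly: registration alone won't
          # block the handler's object from being garbage collected
          self._listeners.append(weakref.WeakMethod(handler))

      def click(self):
          for ref in self._listeners:
              handler = ref()        # None once the owner has been collected
              if handler is not None:
                  handler()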

--

JS's 'this'

note: this seems to be disliked, probably b/c it's dynamically scoped. ppl use it indirectly to define a 'self' for objects, but you have to explicitly say 'var self = this' in the constructor, which is confusing and annoying. and 'this' can be arbitrarily set by the caller via .apply() or .call(), so we can't just rely on 'this' inside methods. See http://www.digital-web.com/articles/scope_in_javascript/

also, if you define an inner function in a method, you can't get to the method's 'this', because the inner function's own 'this' shadows it. ppl want a 'self' that is object-scoped. ECMAScript 6 fixes this with arrow functions (which grew out of the 'block lambdas' proposal); they don't bind their own 'this'.

so, i guess another headache of the dynamically-scoped automatic 'this' is that you can't reach an outer scope's 'this' from an inner function, because the binding isn't static.

hmm, actually, i don't really see why ppl don't like it; it seems much like Python's 'self', only implicit, and (i think) without a distinction between an 'object' and a variable in a lookup table. the trouble with inner functions seems to come from this implicitness and lack of distinction: you can't distinguish an inner function from a subobject, so the inner function's 'this' shadows the method's; and you can't avoid that by renaming 'self', because you don't get to choose its name. the trouble ppl have comprehending 'this' also seems to come from the implicitness. ironic, since ppl (including me) complain about Python's explicit 'self'.
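for contrast, Python's explicit 'self' is just an ordinary closed-over local, so inner functions can always reach it:

  class Counter:
      def __init__(self):
          self.n = 0

      def make_incrementer(self):
          def inc():           # the inner function closes over `self` like
              self.n += 1      # any other local name; nothing shadows it
          return inc

  c = Counter()
  bump = c.make_incrementer()
  bump()
  bump()
  print(c.n)   # 2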

--

an object-scoped 'self' or 'me'

--

attributes, e.g. ability to mark an object field as "Serializable"

--

an SQL-like query interface over primitive collection types; see also LINQ
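Python comprehensions are a rough, less ambitious analog (data made up):

  people = [("ann", 34), ("bob", 19), ("cid", 27)]

  # roughly: SELECT name FROM people WHERE age > 20 ORDER BY age
  adults = sorted((age, name) for name, age in people if age > 20)
  print([name for _, name in adults])   # ['cid', 'ann']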

--

random stuff to learn about the isomorphism between classical logic and control operators:

http://link.springer.com/chapter/10.1007/3-540-45061-0_68#page-
http://pdf.aminer.org/000/267/130/minimal_classical_logic_and_control_operators.pdf

on the theory of various control operators:

A Library of High Level Control Operators (1993) by Christian Queinnec

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4790 ~/papers/programmingLanguages/libraryOfHighLevelControlOperators.pdf

"shift to control" https://www.cs.indiana.edu/l/www/ftp/techreports/TR600.pdf#page=103 (~/papers/programmingLanguages/proc5thWorkshopSchemeAndFunctionalProgramming.pdf PDF page 103, "shift to control") (cites previous)

Adding delimited and composable control to a production programming environment http://dl.acm.org/citation.cfm?id=1291178 http://users.eecs.northwestern.edu/~robby/pubs/papers/icfp2007-fyff.pdf ~/papers/programmingLanguages/practicalDelimitedAndComposableControl.pdf (cites previous)

The Theory and Practice of Programming Languages with Delimited Continuations http://repository.readscheme.org/ftp/papers/plsemantics/danvy/db_thesis.pdf (cites shift to control)

Abstracting control. O. Danvy and A. Filinski. Proceedings of the 1990 ACM Conference on LISP and Functional Programming, 1990. Abstract: "The last few years have seen a renewed interest in continuations for expressing advanced control structures in programming languages, and new models such as Abstract Continuations have been proposed to capture these dimensions. This article investigates ..."

Representing control: A study of the CPS transformation. O. Danvy and A. Filinski. Mathematical Structures in Computer Science, 1992. Abstract: "This paper investigates the transformation of λv-terms into continuation-passing style (CPS). We show that by appropriate η-expansion of Fischer and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free ..."

http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.33.7802

---

http://lambda-the-ultimate.org/node/4786

Extensible Effects -- An Alternative to Monad Transformers, by Oleg Kiselyov, Amr Sabry and Cameron Swords.

---

syntax: allow multiple symbol names under one declaration. example from Coq:

" As a notational convenience, if two or more arguments have the same type, they can be written together. In the following definition, (n m : nat) means just the same as if we had written (n : nat) (m : nat).

Fixpoint mult (n m : nat) : nat :=
  match n with
  | O => O
  | S n' => plus m (mult n' m)
  end."

--

pattern matching on multiple expressions at once. example from Coq:

" You can match two expressions at once by putting a comma between them:

Fixpoint minus (n m : nat) : nat :=
  match n, m with
  | O , _ => O
  | S _ , O => n
  | S n', S m' => minus n' m'
  end."

--

use of _ for dummy variables


-- GADTs:

http://www.haskell.org/haskellwiki/GADTs_for_dummies

http://en.wikibooks.org/wiki/Haskell/GADT "Basically, they allow you to explicitly write down the types of the constructors."

http://lambda-the-ultimate.org/node/1293

http://www.haskellforall.com/2012/06/gadts.html (this one is very technical)

--

"

Keep in mind that the unification above works because there is a unique isomorphism between the types and identity functions in a programming language. This is the essential criterion that justifies unifying two constructs in a programming language.

When one syntactically unifies constructs that are conceptually distinct, the result is less justifiable.

One example is the unification of functions and lists in LISP - which creates some very interesting possibilities for introspection, but it means that functions carry around a lot of observable metadata that breaks foundational properties like parametricity and extensional equality.

Another example is Java and C#'s unification of value types (like int) and object types (like Int). Though C#'s approach is more automatic, both create strange observable properties, such as exposing pointer equality on boxed Int's that differs from the underlying int equality.

In the long-run, such unification of disparate concepts will be recognized as "clever hacks" rather than valid programming language design practices. " http://lambda-the-ultimate.org/node/1085#comment-11645

--

tail call optimization (== tail call elimination) == TCO == TCE

--

clojure's 'recur' directive in place of TCO
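without TCO you can get the same stack-safety by hand with a trampoline; a Python sketch (trampoline/fact are made-up names, not std lib):

  def trampoline(f, *args):
      result = f(*args)
      while callable(result):    # keep bouncing until a real value comes back
          result = result()
      return result

  def fact(n, acc=1):
      if n == 0:
          return acc
      return lambda: fact(n - 1, n * acc)   # return a thunk instead of a tail call

  print(trampoline(fact, 1000))  # no RecursionError; stack depth stays O(1)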

---

http://lambda-the-ultimate.org/node/3166

"What Are The Resolved Debates in General Purpose Language Design?

In the history of PL design there has been quite a bit of vitriol spilled over design choices. These days bringing up "static vs dynamic typing" is a good way to generate some heat on a cold winter web forum.

But a few debates seem to have been pretty firmly resolved, at least for general purpose languages. GP languages have considered goto harmful for decades now, dynamic scoping is seen as a real oddity, and manual memory management is a choice only made by systems language designers.

Questions for LtU: What other debates have been "resolved" in the sense that general purpose languages are hardly ever designed with an alternative to the consensus? What debates will be resolved in the near future? And what apparently resolved debates deserve to be reopened?

One rule: "static vs dynamic typing" will be considered off topic because clearly and emphatically that debate hasn't been resolved and has no prospect of being resolved any time soon. Also, it's boring. "

" First-class functions. Memory safety. " (some argument on FCF in the thread below)

" 1. Structured programming (Goto considered harmful) 2. Lexical Scoping "

but: "Besides CL and Perl, Clojure includes a (thread local) dynamically scoped binding construct as part of the language and Scala has one in its library. So the resolved issue is, as you say, what's the default rather than whether it should exist or not."

"

Here's a few:

Might not think about these because we don't argue about them.

    Numbers (at least small integrals and finite-precision reals)
    Arithmetic (sum, product, division, modulo)
    Structured Aggregate Data (at least one mechanism to glob small bits of data into larger structures)... but note that immutability of these structures is NOT resolved.
    Structured Programs (at least one mechanism, such as procedures, to break down big programs into smaller chunks)
    Conditional Execution (ability to specify that a subprogram execute only in certain conditions)
    Recursive Definitions (ability for a procedure, function, or method to ultimately call back into itself, including co-recursion)
    Dynamic Memory Allocation & Management (excepting in certain embedded domains... ability to fetch more memory on demand and later release it; doing so may be explicit or implicit)
    Pattern Matching - from simple to advanced forms of switch/case statements.
    Modularity - breaking down large projects into bite-sized chunks accessed through an interface (usually a set of shared names)
    Named Variables - ability to assign something to a name at runtime then use that name to access that something; issue of single-assignment vs. mutable variables remains.

Here's a few stubborn old entries that I believe aren't yet resolved to everyone's satisfaction:

    Exception Handling (and especially Java-style 'checked' exceptions, which some claim are even more 'evil' than regular exceptions)
    Reflection
    Concurrency (many languages still don't acknowledge it, nobody agrees on how to do it)
    Representing IO
    Macros/Extensible Language/Operator Overloading (much holy war fodder between language designers here, concerns regarding maintainability of code by newcomers to projects)
    Syntax, especially Syntax of comments :-)

There are tons more issues that simply don't come up that often (security in languages is a major focus of E and my own language, language integrated queries are on the new side but are regarded as pretty cool, resumable exceptions are wicked nice but to my knowledge only Lisp really has them, multi-methods, open functions, open data types, aspect-oriented programming, dataflow constructs, versioning/transactions/undo-capabilities, etc. simply don't come up in enough languages to be subject to widespread debate). By dmbarbour at Tue, 2009-01-13 16:59

"

" unicode strings "

" XPath/XQuery " (most ppl disagree but some like it, tho dont consider it 'resolved')

"Constants "

" Block scope "

" Resolved to be deserved to be resolved.

1. High level module system. Functor, Functor-Functors.
2. Module / Component dependency and composition wiring.
3. Symmetrical, turtles-all-the-way-down, full control of Opaque or Transparent Types.
4. Macros.
5. Immutability, with concise syntax for copy-constructing new instances with mutated field values (Records/Objects/Tuples).
6. Opting for Nullessness.
7. Contracts with blame.
8. Pattern Matching.
9. Tail Recursion.
10. Continuations (even better, serializable continuations).
11. Seamless and simple dynamic compilation and loading.
12. Named and Default Parameters.
13. Currying.
14. Laziness / Call-By-Name capable but not enforced.
15. Well-behaved multiple inheritance / mixins.
16. Introspection and Reflection.
17. Runtime typing annotations.
18. Closures.
19. First Class Functions, Classes, Modules.
20. Component Versioning.
21. Optional Effects System. "

(settled as bad, i assume): " `typeless' variables as in B. self-modifying code. Weird lexical schemes like APL. `natural' English like FLOW-MATIC and COBOL. "

" Whitespace as a token separator "

"

Resolved negatively

Structural code editors (i.e. that would not let you make code that doesn't parse) failed. Turns out that today's style of IDEs, which let you write whatever you want and simply highlight errors and have good error recovery, have won.

Similarly, graphical programming (stitching together boxes and arrows) has been shown to be far inferior to good-ole textual programming, relegated to UML and design tools that generate boilerplate code. By Ben L. Titzer at Wed, 2009-01-14 20:55

"

"

Static syntax

I agree with David Barbour's comment that syntax isn't a settled matter. Beyond that, I think this is a place that there should be space for different approaches.

But one very valuable property of syntax does seem to be accepted wisdom: you should be able to lex/parse the syntax without running the program. This was not always the case: there are some legacy languages out there where the dynamics of the reader could be manipulated during program execution. TeX, a cornucopia of both good rendering algorithms and bad PL design decisions, is probably the most important such language.

Postscript: Following the mention of languages such as Katahdin & Converge in the Macro Systems thread, I see that the above isn't and shouldn't be accepted wisdom, since there is interest in embedding DSLs with user defined grammars & the whole enterprise isn't incoherent. I'll refine my claim to say there should be a well-behaved separation between syntax and evaluation of code, such as is violated on a grand scale by TeX, and also, in a less grand sense, by some macro expander languages such as m4. By Charles Stewart at Fri, 2009-01-16 12:35

"