proj-oot-old-150618-ootOopNotes1

http://journal.stuffwithstuff.com/2010/09/18/futureproofing-uniform-access-and-masquerades/

points out three kinds of futureproofing boilerplate in Java:

these are all annoying boilerplate, but they are all good things to do, because if you don't do them and you later want to make one of the following changes, then you must change every call site instead of just one line. This is especially bad if your program is a shipped library and the call sites are in client code (i.e. you would have to make breaking/incompatible changes to your library):

now, Oot already deals with the first two of these, and maybe the third. If not, we should probably deal with the third, too. That is to say, when you call a constructor (if we have constructors at all; i'm leaning towards yes), you don't actually pick an implementation but merely an interface, with perhaps a factory method determining the implementation. In other words, 'everything is an interface', like we always say.
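a minimal Python sketch of this idea (the names Seq, SmallSeq, and BigSeq, and the size-16 cutoff, are all made up): the client calls what looks like a constructor, but the 'constructor' only names an interface, and a factory picks the implementation.

    from abc import ABC, abstractmethod

    class Seq(ABC):
        def __new__(cls, items):
            # the 'constructor' is really a factory: the client names the
            # interface (Seq), and this code picks the implementation
            if cls is Seq:
                impl = SmallSeq if len(items) < 16 else BigSeq
                return super().__new__(impl)
            return super().__new__(cls)

        @abstractmethod
        def first(self): ...

    class SmallSeq(Seq):
        def __init__(self, items):
            self.items = list(items)
        def first(self):
            return self.items[0]

    class BigSeq(SmallSeq):
        pass  # stand-in for, say, a chunked representation

    s = Seq([1, 2, 3])              # client code never names SmallSeq
    assert isinstance(s, SmallSeq) and s.first() == 1

the point is that swapping SmallSeq for another implementation later changes one line in the factory, not every call site.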

in http://journal.stuffwithstuff.com/2010/10/21/the-language-i-wish-go-was/, he also points out other kinds of futureproofing that aren't needed in Java but that may be needed in other languages, such as Go:

---

should we have constructors, copy constructors, and move constructors? And autodefined repr, equals (equivalence; ad hoc polymorphic), structequals (true structural equality), memequals (reference equality), and hash?
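to make the distinctions concrete, here's how these would map onto Python analogues (the names structequals and memequals come from the note above, not from Python; Point is made up):

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __eq__(self, other):      # 'equals': ad hoc, overridable
            return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
        def __repr__(self):           # autodefined repr analogue
            return f"Point({self.x}, {self.y})"
        def __hash__(self):
            return hash((self.x, self.y))

    def structequals(a, b):
        # true structural equality: same type and same fields
        return type(a) is type(b) and a.__dict__ == b.__dict__

    def memequals(a, b):
        # reference equality
        return a is b

    p, q = Point(1, 2), Point(1, 2)
    assert p == q and structequals(p, q) and not memequals(p, q)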

---

could require traits to be stateless and distinguish 'classes' which hold state

---

could have traits each have their own namespace, with any instance methods and variables a trait defines going into that trait's namespace by default (note this requires explicit instance variable definitions). Now variable references are looked up first in local scope, then in the trait-local instance scope, and only then in the instance scope. Could also allow traits to declare instance methods and variables in the shared instance scope, but only with an explicit notation (perhaps just Python's self.varName for instance-level variables). If you do that, two traits can now introduce the same name without clashing, unless they intend to clash, without any compile-time checking (except for the trait names themselves, and for method names), and so we have commutative traits (sketch below). No wait, you still have to compile-time check to prohibit the diamond problem, unless you require fully-qualified names for all inherited stuff.

however, this compile-time checking either prohibits or at least doesn't help with dynamic addition of methods, right? So maybe require assertions in this case? That sounds troublesome: how would you upgrade an implementation of an API that used static methods to one using dynamically-generated ones, without making the client assert the existence of those methods? And if you use methodMissing, you make the client change to capitalized method names.

note that this paragraph, as written, conflicts with the previous suggestions; e.g. as written, traits can declare new instance variables. But you could forbid that, on the idea that even though methods are like variables, method definitions are static and so allowed. Or you could just unify stateful 'classes' and traits.
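a minimal Python sketch of the trait-local instance scope idea from above (Instance, the trait names 'T1'/'T2', and the field 'count' are all made up): each trait's state lives in its own namespace, so two traits can define the same name without clashing.

    class Instance:
        def __init__(self):
            self.shared = {}          # the ordinary instance scope
            self.trait_scopes = {}    # one private namespace per trait

        def trait_get(self, trait, name):
            # lookup order: trait-local instance scope, then instance scope
            scope = self.trait_scopes.setdefault(trait, {})
            if name in scope:
                return scope[name]
            return self.shared[name]

        def trait_set(self, trait, name, value):
            self.trait_scopes.setdefault(trait, {})[name] = value

    obj = Instance()
    obj.trait_set('T1', 'count', 1)    # T1's private 'count'
    obj.trait_set('T2', 'count', 99)   # T2's 'count' does not clash
    assert obj.trait_get('T1', 'count') == 1
    assert obj.trait_get('T2', 'count') == 99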

---

could allow implicit local variable definitions (including ones which become included in closures; that's okay because they are lexically scoped) but require explicit instance variable definitions (implicit is not okay here because instance scope behaves more like dynamic scope, i.e. it's harder to read)
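Python already roughly behaves this way, which makes a handy sketch: locals are defined implicitly (and closures capture them lexically), while __slots__ can force instance variables to be declared explicitly.

    def make_counter():
        count = 0                  # implicit local definition
        def bump():
            nonlocal count         # lexically scoped, so capture is unambiguous
            count += 1
            return count
        return bump

    class Declared:
        __slots__ = ("size",)      # explicit instance-variable declaration

    c = make_counter()
    assert c() == 1 and c() == 2

    d = Declared()
    d.size = 3                     # fine: declared
    try:
        d.color = "red"            # rejected: never declared
    except AttributeError:
        pass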

---

classes as 'state domains' to allow tracking of permitted mutable state/non-referential transparency within various functions

or perhaps this should be module-level?!? (modules as state domains, or modules as the domains with which a set of permitted mutable state domains is associated?) i doubt it..

---

" OOP Isn't a Fundamental Particle of Computing The biggest change in programming over the last twenty-five years is that today you manipulate a set of useful, flexible data types, and twenty-five years ago you spent a disproportionately high amount of time building those data types yourself.

C and Pascal--the standard languages of the time--provided a handful of machine-oriented types: numbers, pointers, arrays, the illusion of strings, and a way of tying multiple values together into a record or structure. The emphasis was on using these rudiments as stepping stones to engineer more interesting types, such as stacks, trees, linked lists, hash tables, and resizable arrays.

In Perl or Python or Erlang, I don't think about this stuff. I use lists and strings and arrays with no concern about how many elements they contain or where the memory comes from. For almost everything else I use dictionaries, again no time spent worrying about size or details such as how hash collisions are handled.

I still need new data types, but it's more a repurposing of what's already there than crafting a custom solution. A vector of arbitrary dimension is an array. An RGB color is a three-element tuple. A polynomial is either a tuple (where each value is the coefficient and the index is the degree) or a list of {Coefficient, Degree} tuples. It's surprising how arrays, tuples, lists, and dictionaries have eliminated much of the heavy lifting from the data structure courses I took in college. The focus when implementing a balanced binary tree is on how balanced binary trees work and not about suffering through a tangled web of pointer manipulation.

Thinking about how to arrange ready-made building blocks into something new is a more radical change than it may first appear. How those building blocks themselves come into existence is no longer the primary concern. In many programming courses and tutorials, everything is going along just fine when there's a sudden speed bump of vocabulary: objects and constructors and abstract base classes and private methods. Then in the next assignment the simple three-element tuple representing an RGB color is replaced by a class with getters and setters and multiple constructors and--most critically--a lot more code.

This is where someone desperately needs to step in and explain why this is a bad idea and the death of fun, but it rarely happens.

It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the fundamental particle of computing that some people want it to be. When blindly applied to problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet there's often an aesthetic insistence on objects for everything all the way down. That's too bad, because it makes it harder to identify the cases where an object-oriented style truly results in an overall simplicity and ease of understanding.

(Consider this Part 2 of Don't Distract New Programmers with OOP.) "
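for concreteness, here are the two polynomial encodings mentioned in the quote, sketched in Python (the eval helpers are made up):

    # 3x^2 + 2, both ways
    dense = (2, 0, 3)             # index is the degree, value the coefficient
    sparse = [(2, 0), (3, 2)]     # list of (Coefficient, Degree) tuples

    def eval_dense(p, x):
        return sum(c * x**d for d, c in enumerate(p))

    def eval_sparse(p, x):
        return sum(c * x**d for c, d in p)

    assert eval_dense(dense, 2) == eval_sparse(sparse, 2) == 14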

---

foundart 3 days ago


I see the Julia home page lists multiple dispatch as one of its benefits. Since my only real exposure to multiple dispatch was when I inherited some CLOS code where it was used to create a nightmare of spaghetti, I'm wondering if any Julia fans here would care to elaborate on how they've used multiple dispatch for Good™ instead of Evil™


astrieanna 2 days ago


Multiple dispatch lets you make math operators work like they do in math. That means that you can use `+` the same way on ints, floats, matrices, and your own self-defined numeric type. If `x` is a variable of your new numeric type, OO languages make making `x + 5` work easy, but `5 + x` super hard. Multiple dispatch makes both cases (equally) easy. This was, as I understand it, the major reason that Julia uses multiple dispatch.

Multiple dispatch can make interfaces simpler: you can easily offer several "versions" of a function by changing which arguments they take, and you can define those functions where it makes sense, even if those places are spread across multiple modules or packages. Julia provides great tools (functions) that make methods discoverable, help you understand which method you're calling, and help you find the definition of methods.

Looking at some Julia code (the base library or major packages) might give you a better idea of how Julia uses multiple dispatch.
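a toy Python registry makes the `x + 5` / `5 + x` point concrete (Julia's dispatch is built into the language; this sketch, and the Sym type, are made up for illustration):

    methods = {}

    def defmethod(name, *types):
        def register(fn):
            methods[(name,) + types] = fn
            return fn
        return register

    def call(name, *args):
        # dispatch on the types of ALL arguments, not just the first
        return methods[(name,) + tuple(type(a) for a in args)](*args)

    class Sym:
        def __init__(self, label):
            self.label = label

    @defmethod("add", int, int)
    def _(a, b): return a + b

    @defmethod("add", Sym, int)
    def _(s, n): return Sym(f"({s.label} + {n})")

    @defmethod("add", int, Sym)          # '5 + x' is as easy as 'x + 5'
    def _(n, s): return Sym(f"({n} + {s.label})")

    x = Sym("x")
    assert call("add", x, 5).label == "(x + 5)"
    assert call("add", 5, x).label == "(5 + x)"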

--

http://bytes.com/topic/python/answers/160900-isa-keyword

"'isa' keyword" thread, started by talin at acm dot org:

Although I realize the perils of even suggesting polluting the Python namespace with a new keyword, I often think that it would be useful to consider defining an operator for testing whether or not an item is a member of a category.

Currently, we have the 'in' operator, which tests for membership within a container, and that works very well ...

I propose the word 'isa' because the term 'isa hierarchy' is commonly used to indicate a tree of types. So the syntax would look like this:

    if bear isa mammal:
    if name isa str:

(I suppose it would look prettier to put a space between "is" and "a", but there are many obvious reasons why you don't want "a" to be a keyword!)

The "isa" operator would of course be overloadable, perhaps by an accessor functions called __isa__, which works similarly to __contains__. The potential uses for this are not limited to isinstance() sugar, however. For example:

    if image isa gif:
    elif image isa jpeg:
    elif image isa png:

Rocco Moretti replied:

    Terry Hancock wrote:
    > On Thursday 01 September 2005 07:28 am, Fuzzyman wrote:
    > > What's the difference between this and ``isinstance``?
    > I must confess that an "isa" operator sounds like it would
    > have been slightly nicer syntax than the isinstance() built-in
    > function. But not enough nicer to change, IMHO.

    Especially considering that checking parameters with "isinstance" is considered bad form with Python's duck typing.

A follow-up reply defending the proposal:

    > What's the difference between this and ``isinstance``?

    What's the difference between 'in' and 'has_key()'? 1) It's shorter and more readable, 2) it can be overridden to mean different things for different container types.

    > What's wrong with:
    >
    >     if image.isa(gif):
    >     elif image.isa(jpeg):
    >     elif image.isa(png):

    That forces the classification logic to be put into the instance, rather than in the category. With the "in" keyword, the "__contains__" function belongs to the container, not the contained item, which is as it should be, since an item can be in multiple containers.
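the proposed __isa__ protocol is easy to sketch as a plain function rather than a keyword (the gif category below is made up; note the classification logic lives in the category, as the proposer wants):

    def isa(item, category):
        check = getattr(category, "__isa__", None)
        if check is not None:
            return check(item)
        return isinstance(item, category)   # fall back to isinstance() sugar

    class gif:
        @staticmethod
        def __isa__(data):
            # the category decides membership, not the instance
            return isinstance(data, bytes) and data[:6] in (b"GIF87a", b"GIF89a")

    assert isa(b"GIF89a...", gif)
    assert isa("hello", str)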

---

consider 'static' super in the sense that super could cause a one-time inclusion of new methods and fields instead of causing a chain lookup whenever the object is accessed? probably not..

--

also, https://news.ycombinator.com/item?id=7842047 points out that Python's super() is a little too powerful, making it hard for alternative implementations to compile:

carbon12 9 days ago


> What parts of Python 3 syntax are missing? Which parts of the library don't compile?

The only things that don't compile properly are certain uses of "super()". super() without arguments is a very strange beast that captures the first argument of the function (interpreting it as the self object), and needs to infer its class.

Other than that, all the Python scripts in the Python 3 standard library will compile.

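what makes the zero-argument form strange is visible in its desugaring: inside a method, super() behaves like super(__class__, first_argument), so a compiler has to infer the enclosing class and capture the first argument.

    class Base:
        def greet(self):
            return "base"

    class Child(Base):
        def greet(self):
            implicit = super().greet()              # compiler infers Child and self
            explicit = super(Child, self).greet()   # what it desugars to
            assert implicit == explicit
            return implicit

    assert Child().greet() == "base"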

--

maybe have multimethods but only commutative ones, e.g. for an operator like addition over ints or floats, define +(int, int), +(int, float), +(float, float), and never a separate +(float, int) (sketch below)

?

nah
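even though the note above rejects the idea, here's what commutative-only dispatch could look like as a Python sketch (the registry is made up): each signature is stored as a multiset of types, so +(int, float) and +(float, int) are necessarily the same method.

    from collections import Counter

    add_methods = {}

    def defadd(*types):
        def register(fn):
            # a frozen multiset of types, so argument order can't matter
            add_methods[frozenset(Counter(types).items())] = fn
            return fn
        return register

    def add(a, b):
        key = frozenset(Counter((type(a), type(b))).items())
        return add_methods[key](a, b)

    @defadd(int, int)
    def _(a, b): return a + b

    @defadd(int, float)
    def _(a, b): return float(a) + float(b)   # body must itself be symmetric

    assert add(2, 3) == 5
    assert add(2, 0.5) == add(0.5, 2) == 2.5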

--

http://nice.sourceforge.net/visitor.html

--

maybe have multimethods (and unattached functions in general), but then for encapsulating state (or rather, for impure operations), always use single-dispatch objects with methods, like beautiful Io? (i mean that Io has single-dispatch objects, not that Io has both of these)

so addition could be a normal function, as could sprintf (which is pure), but printf would be an object method (it does I/O)

if you do this, though, then what about accesses to aliased reference variables that are 'within one of the current monads'? This should logically be decided lazily, because accesses to something 'within one of the current monads' should not 'count' as aliased references, right? But in that case, since we may have multiple (commutative) monads, this doesn't jibe with the 'one receiver' rule for non-referentially-transparent things (although maybe it does, since each such thing's 'receiver' just means either that the thing itself is accessed only via an API/set of methods, or that accesses to it are syntactic sugar for calls to methods of its monad?).

--

" R does not prevent the declaration of ambiguous multi-methods. At each method call, R will attempt to find the best match between the classes of the parameters and the signatures of method bodies. Thus add(r, l) would be treated as the addition of two “Point” objects. The resolution algorithm differs from Clos’s and if more than one method is applicable, R will pick one and emit a warning. One unfortunate side effect of combining generic functions and lazy evaluation is that method dispatch forces promises to assess the class of each argument. Thus when S4 objects are used, evaluation of arguments becomes strict " -- http://r.cs.purdue.edu/pub/ecoop12.pdf

---

http://www.haskell.org/haskellwiki/Class_system_extension_proposal

---

as in the above proposal, for Oot, we need to ensure that default methods in a typeclass can be provided by any descendant of that typeclass (to be inherited by THEIR descendants), not just by the original typeclass defining that method, as Haskell currently requires
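Python's ABCs happen to allow exactly this, which gives a concrete picture of the desired behavior (the Functor/Monad/ListM names are illustrative): an intermediate 'typeclass' supplies a default for a method declared above it, and its own descendants inherit that default.

    from abc import ABC, abstractmethod

    class Functor(ABC):
        @abstractmethod
        def fmap(self, f): ...

    class Monad(Functor):
        @abstractmethod
        def bind(self, f): ...

        @classmethod
        @abstractmethod
        def unit(cls, x): ...

        def fmap(self, f):
            # a DESCENDANT of Functor supplies the default; ListM below
            # inherits it without redefining fmap
            return self.bind(lambda x: self.unit(f(x)))

    class ListM(Monad):
        def __init__(self, xs): self.xs = list(xs)
        def bind(self, f):
            return ListM([y for x in self.xs for y in f(x).xs])
        @classmethod
        def unit(cls, x): return cls([x])

    assert ListM([1, 2]).fmap(lambda x: x + 1).xs == [2, 3]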

---

so, what's the difference between a mixin and a superclass? i think i'll define it like this:

in Oot, a superclass is the more general thing, and operates like a superclass in Python (well, at least so far; i still have to grok CLOS and see if i like that better)

a mixin is like a superclass, except that internal state added by the mixin is visible only to that mixin, not to other methods in the class or to other mixins. For instance, if mixin "M1" is added to class "A", and M1 has an internally visible field "_M1F1", then if methods of A outside M1 attempt to access A._M1F1, they will find nothing, and if they assign to it, they will create a different variable.
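Python's double-underscore name mangling is a rough existing approximation of this rule, and makes a handy sketch: a field spelled __f1 inside mixin M1 is stored as _M1__f1, so code elsewhere in A that assigns its own __f1 gets a different slot.

    class M1:
        def set_m1(self):
            self.__f1 = "mixin-private"     # stored as _M1__f1
        def get_m1(self):
            return self.__f1

    class A(M1):
        def set_a(self):
            self.__f1 = "class-private"     # stored as _A__f1, a different field
        def get_a(self):
            return self.__f1

    a = A()
    a.set_m1()
    a.set_a()
    assert a.get_m1() == "mixin-private"    # M1's view is undisturbed
    assert a.get_a() == "class-private"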

---