proj-oot-ootOopNotes1

http://journal.stuffwithstuff.com/2010/09/18/futureproofing-uniform-access-and-masquerades/

points out three kinds of futureproofing in java:

these are all annoying boilerplate, but they are all good things to do, because if you don't, and you later want to make one of the following changes, then you must change every call site instead of just one line. This is especially bad if your program is a shipped library and the call sites are in client code (i.e. you would have to make breaking/incompatible changes to your library):

now, Oot already deals with the first two of these, and maybe the third. If not, we should probably deal with the third, too. That is to say, when you call a constructor (if we have constructors at all; i'm leaning towards yes), you don't actually pick an implementation but merely an interface, plus perhaps a factory method that determines the implementation. In other words, 'everything is an interface', like we always say.
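A minimal Python sketch of this 'a constructor determines an interface, not an implementation' idea (Stack and _ListStack are invented names; real Oot syntax would differ): the interface's constructor acts as a factory, so call sites never name a concrete class.

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The interface clients program against."""
    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

    def __new__(cls, *args, **kwargs):
        # Calling Stack() picks an implementation; callers never name one.
        if cls is Stack:
            return super().__new__(_ListStack)
        return super().__new__(cls)

class _ListStack(Stack):
    """One concrete implementation, hidden behind the interface."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

s = Stack()          # looks like an ordinary constructor call...
s.push(1)
s.push(2)
assert s.pop() == 2  # ...but s is whatever implementation Stack chose
```

Swapping `_ListStack` for some other implementation later changes one line inside `__new__`, not any call site.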

in http://journal.stuffwithstuff.com/2010/10/21/the-language-i-wish-go-was/, he also points out a few more kinds of futureproofing that Go handles and Java doesn't, as well as other kinds that aren't needed in Java but that may be needed in other languages, such as Go:

---

should we have constructors, copy constructors, move constructors? and autodefined repr, equals (equivalence (ad hoc polymorphic)), structequals (true structural equality), memequals (reference equality), hash?

---

could require traits to be stateless and distinguish 'classes' which hold state

---

could have traits each have their own namespace, and by default any instance methods and variables defined are in that namespace (note this requires explicit instance variable definitions). Now variable references are first looked up in local scope, then in the trait-local instance scope, and only then in the instance scope. Could also allow traits to declare instance methods and variables in instance scope, but with an explicit notation (perhaps just Python-style self.varName for instance-level variables). If you do that, two traits can now introduce the same name without clashing, unless they intend to clash, without any compile-time checking (except for the trait names themselves, and for method names), and so we have commutative traits. no wait, you still have to compile-time check to prohibit the diamond problem, unless you require fully-qualified names for all inherited stuff.

however, this compile-time checking either prohibits, or at least doesn't help with, dynamic addition of methods, right? so maybe require assertions in this case? that sounds troublesome: how would you upgrade an implementation of an API that used static methods to one using dynamically-generated ones, without making the client assert the existence of these? if you use methodMissing, you make the client change to capitalized method names.

note that as i've written this paragraph, it conflicts with the previous suggestions, e.g. as written, traits can declare new instance variables. But you could forbid that, on the idea that even though methods are like variables, method defns are static and so allowed. or could just unify stateful 'classes' and traits.
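One way to prototype the trait-local-namespace idea is Python's name mangling, which already gives each class its own private attribute namespace. A sketch (Counter and Audit are made-up trait names): two traits each use a field called __count, yet they never clash on the combined instance.

```python
class Counter:
    """A 'trait' whose state lives in a trait-local namespace.

    Python mangles self.__count (inside Counter) to self._Counter__count,
    so other traits and the host class can reuse the bare name __count."""
    def incr(self):
        self.__count = getattr(self, '_Counter__count', 0) + 1
    def count(self):
        return getattr(self, '_Counter__count', 0)

class Audit:
    """A second trait that also uses the name __count, without clashing."""
    def log(self):
        self.__count = getattr(self, '_Audit__count', 0) + 1
    def audits(self):
        return getattr(self, '_Audit__count', 0)

class Thing(Counter, Audit):
    pass

t = Thing()
t.incr(); t.incr(); t.log()
assert t.count() == 2    # Counter's __count
assert t.audits() == 1   # Audit's __count, a separate variable
```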

---

could allow implicit local variable definitions (including ones which become included in closures, e.g. it's okay because it's lexically scoped) but require explicit instance variable definitions (e.g. implicit is not okay here because it's dynamically scoped, e.g. harder to read)
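Python already sits close to this split: local variables are implicitly defined (and lexically captured by closures), while `__slots__` can force explicit instance-variable declarations. A sketch of both halves (all names invented):

```python
def make_counter():
    n = 0                     # implicit local definition: fine, it's lexical
    def incr():
        nonlocal n            # captured by the closure
        n += 1
        return n
    return incr

class Point:
    __slots__ = ('x', 'y')    # explicit instance-variable declarations
    def __init__(self, x, y):
        self.x, self.y = x, y

c = make_counter()
assert c() == 1 and c() == 2

p = Point(1, 2)
try:
    p.z = 3                   # implicit instance variable: rejected
    rejected = False
except AttributeError:
    rejected = True
assert rejected
```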

---

classes as 'state domains' to allow tracking of permitted mutable state/non-referential transparency within various functions

or perhaps this should be module-level?!? (modules as state domains, or modules as the domains with which a set of permitted mutable state domains are associated?) i doubt it..

---

" OOP Isn't a Fundamental Particle of Computing The biggest change in programming over the last twenty-five years is that today you manipulate a set of useful, flexible data types, and twenty-five years ago you spent a disproportionately high amount of time building those data types yourself.

C and Pascal--the standard languages of the time--provided a handful of machine-oriented types: numbers, pointers, arrays, the illusion of strings, and a way of tying multiple values together into a record or structure. The emphasis was on using these rudiments as stepping stones to engineer more interesting types, such as stacks, trees, linked lists, hash tables, and resizable arrays.

In Perl or Python or Erlang, I don't think about this stuff. I use lists and strings and arrays with no concern about how many elements they contain or where the memory comes from. For almost everything else I use dictionaries, again no time spent worrying about size or details such as how hash collisions are handled.

I still need new data types, but it's more a repurposing of what's already there than crafting a custom solution. A vector of arbitrary dimension is an array. An RGB color is a three-element tuple. A polynomial is either a tuple (where each value is the coefficient and the index is the degree) or a list of {Coefficient, Degree} tuples. It's surprising how arrays, tuples, lists, and dictionaries have eliminated much of the heavy lifting from the data structure courses I took in college. The focus when implementing a balanced binary tree is on how balanced binary trees work and not about suffering through a tangled web of pointer manipulation.

Thinking about how to arrange ready-made building blocks into something new is a more radical change than it may first appear. How those building blocks themselves come into existence is no longer the primary concern. In many programming courses and tutorials, everything is going along just fine when there's a sudden speed bump of vocabulary: objects and constructors and abstract base classes and private methods. Then in the next assignment the simple three-element tuple representing an RGB color is replaced by a class with getters and setters and multiple constructors and--most critically--a lot more code.

This is where someone desperately needs to step in and explain why this is a bad idea and the death of fun, but it rarely happens.

It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the fundamental particle of computing that some people want it to be. When blindly applied to problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet there's often an aesthetic insistence on objects for everything all the way down. That's too bad, because it makes it harder to identify the cases where an object-oriented style truly results in an overall simplicity and ease of understanding.

(Consider this Part 2 of Don't Distract New Programmers with OOP.) "

---

foundart 3 days ago

link

I see the Julia home page lists multiple dispatch as one of its benefits. Since my only real exposure to multiple dispatch was when I inherited some CLOS code where it was used to create a nightmare of spaghetti, I'm wondering if any Julia fans here would care to elaborate on how they've used multiple dispatch for Good™ instead of Evil™

reply

astrieanna 2 days ago

link

Multiple dispatch lets you make math operators work like they do in math. That means that you can use `+` the same way on ints, floats, matrices, and your own self-defined numeric type. If `x` is a variable of your new numeric type, OO languages make making `x + 5` work easy, but `5 + x` super hard. Multiple dispatch makes both cases (equally) easy. This was, as I understand it, the major reason that Julia uses multiple dispatch.

Multiple dispatch can make interfaces simpler: you can easily offer several "versions" of a function by changing which arguments they take, and you can define those functions where it makes sense, even if those places are spread across multiple modules or packages. Julia provides great tools (functions) that make methods discoverable, help you understand which method you're calling, and help you find the definition of methods.

Looking at some Julia code (the base library or major packages) might give you a better idea of how Julia uses multiple dispatch.
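A toy multimethod table in Python makes the `5 + x` point concrete (this is not Julia's mechanism; Money, defadd, and add are invented names):

```python
# registry keyed on the classes of both arguments
_add_impls = {}

def defadd(t1, t2, fn):
    _add_impls[(t1, t2)] = fn

def add(a, b):
    # walk each argument's MRO so subclasses inherit methods
    for t1 in type(a).__mro__:
        for t2 in type(b).__mro__:
            if (t1, t2) in _add_impls:
                return _add_impls[(t1, t2)](a, b)
    raise TypeError(f"no add method for {type(a)}, {type(b)}")

class Money:
    def __init__(self, cents):
        self.cents = cents

defadd(int, int, lambda a, b: a + b)
defadd(Money, int, lambda m, n: Money(m.cents + n * 100))
defadd(int, Money, lambda n, m: Money(m.cents + n * 100))

assert add(2, 3) == 5
assert add(Money(50), 1).cents == 150
assert add(1, Money(50)).cents == 150   # '5 + x' is as easy as 'x + 5'
```

In a single-dispatch OO language only the first argument's class gets a say, which is why `5 + x` needs tricks like Python's `__radd__`.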

--

http://bytes.com/topic/python/answers/160900-isa-keyword

'isa' keyword

talin at acm dot org wrote: Although I realize the perils of even suggesting polluting the Python namespace with a new keyword, I often think that it would be useful to consider defining an operator for testing whether or not an item is a member of a category.

Currently, we have the 'in' operator, which tests for membership within a container, and that works very well ...

I propose the word 'isa' because the term 'isa hierarchy' is commonly used to indicate a tree of types. So the syntax would look like this:

if bear isa mammal:
if name isa str:

(I suppose it would look prettier to put a space between "is" and "a", but there are many obvious reasons why you don't want "a" to be a keyword!)

The "isa" operator would of course be overloadable, perhaps by an accessor function called __isa__, which works similarly to __contains__. The potential uses for this are not limited to isinstance() sugar, however. For example:

if image isa gif:
elif image isa jpeg:
elif image isa png:
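The proposal can be sketched today as a plain function with an `__isa__` hook on the category, falling back to isinstance (the Gif category and its magic-byte check are illustrative, not from the thread):

```python
def isa(obj, category):
    """Ask the category first (analogous to __contains__ belonging to
    the container), then fall back to an isinstance check."""
    check = getattr(category, '__isa__', None)
    if check is not None:
        return check(obj)
    return isinstance(obj, category)

class Gif:
    """A 'category' that classifies by content, not by class."""
    @staticmethod
    def __isa__(obj):
        return (isinstance(obj, (bytes, bytearray))
                and obj[:6] in (b'GIF87a', b'GIF89a'))

assert isa(b'GIF89a\x00\x01', Gif)
assert not isa(b'\x89PNG', Gif)
assert isa('hello', str)        # no __isa__ on str: falls back to isinstance
```

As the thread argues below for `in`/`__contains__`, putting the hook on the category rather than the instance lets one object belong to many categories.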

Rocco Moretti wrote:

    Terry Hancock wrote:
        On Thursday 01 September 2005 07:28 am, Fuzzyman wrote:
            What's the difference between this and ``isinstance``?
        I must confess that an "isa" operator sounds like it would
        have been slightly nicer syntax than the isinstance() built-in
        function. But not enough nicer to change, IMHO.

    Especially considering that checking parameters with "isinstance" is
    considered bad form with Python's duck typing.

A later reply in the thread:

        What's the difference between this and ``isinstance``?
    What's the difference between 'in' and 'has_key()'? 1) It's shorter and
    more readable, 2) it can be overridden to mean different things for
    different container types.
        What's wrong with:
        if image.isa(gif):
        elif image.isa(jpeg):
        elif image.isa(png):
    That forces the classification logic to be put into the instance,
    rather than in the category. With the "in" keyword, the "__contains__"
    function belongs to the container, not the contained item, which is as
    it should be, since an item can be in multiple containers.
        Especially considering that checking parameters with "isinstance" is
        considered bad form with Python's duck typing.

---

consider 'static' super in the sense that super could cause a one-time inclusion of new methods and fields instead of causing a chain lookup whenever the object is accessed? probably not..

--

also, https://news.ycombinator.com/item?id=7842047 points out that Python's super is a little too powerful, making it too hard to implement:

carbon12 9 days ago

link

> What parts of Python 3 syntax are missing? Which parts of the library don't compile?

The only things that don't compile properly are certain uses of "super()". super() without arguments is a very strange beast that captures the first argument of the function (interpreting it as the self object), and needs to infer its class.

Other than that, all the Python scripts in the Python 3 standard library will compile.

reply

--

mb have multimethods but only commutative ones, e.g. for an operator like addition over ints or floats, define +(int, int), +(int, float), +(float, float)

?

nah

--

http://nice.sourceforge.net/visitor.html

--

mb have multimethods (and unattached functions in general), but then for encapsulating state (or rather, for impure operations), always use single-dispatch objects with methods like beautiful Io? (i mean, Io has single-dispatch objects, not that Io has both of these)

so addition could be a normal function, as could sprintf, but printf would be an object method

if you do this, though, then what about accesses to aliased reference variables which are 'within one of the current monads'? this should logically be done lazily, because accesses to something 'within one of the current monads' should not 'count' as aliased references, right? but in that case, since we may have (commutative) multiple monads, this doesn't jibe with the 'one receiver' rule for non-referentially-transparent things (although maybe it does, since each such thing's 'receiver' just means either that the thing itself is accessed only via an API/set of methods, or that accesses to it are syntactic sugar for calls to methods of its monad?).

--

" R does not prevent the declaration of ambiguous multi-methods. At each method call, R will attempt to find the best match between the classes of the parameters and the signatures of method bodies. Thus add(r, l) would be treated as the addition of two “Point” objects. The resolution algorithm differs from Clos’s and if more than one method is applicable, R will pick one and emit a warning. One unfortunate side effect of combining generic functions and lazy evaluation is that method dispatch forces promises to assess the class of each argument. Thus when S4 objects are used, evaluation of arguments becomes strict " -- http://r.cs.purdue.edu/pub/ecoop12.pdf

---

http://www.haskell.org/haskellwiki/Class_system_extension_proposal

---

as in the above proposal, for oot, need to ensure that default methods in a typeclass can be provided by any descendent of that typeclass (to be inherited by THEIR descendents), not just in the original typeclass defining that method, as in Haskell currently
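In Python terms (with an ABC standing in for a typeclass; all names here are invented) the desired behavior looks like this: an intermediate descendant supplies a default method body that its own descendants then inherit, which is the thing Haskell's class system currently disallows.

```python
from abc import ABC, abstractmethod

class Eq(ABC):
    """The 'typeclass': eq is required, neq has a default here."""
    @abstractmethod
    def eq(self, other): ...
    def neq(self, other):
        return not self.eq(other)

class EqByKey(Eq):
    """A descendant of the typeclass that provides a *default* for eq,
    to be inherited by ITS descendants."""
    def eq(self, other):
        return self.key() == other.key()
    @abstractmethod
    def key(self): ...

class Point(EqByKey):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def key(self):
        return (self.x, self.y)

assert Point(1, 2).eq(Point(1, 2))   # eq defaulted by EqByKey
assert Point(1, 2).neq(Point(3, 4))  # neq defaulted by Eq
```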

---

so, what's the difference between a mixin and a superclass? i think i'll define it like this:

in oot, a superclass is the more general thing, and operates like a superclass in Python (well, at least so far; i still have to grok CLOS and see if i like that better)

a mixin is like a superclass, except that internal state added by the mixin is only visible to that mixin, not to other methods in the class or to other mixins. For instance, if mixin "M1" is added to class "A", and M1 has an internally visible field "_M1F1", then if methods of A outside M1 attempt to access A._M1F1, they will get nothing, and if they create it, it will be a different variable.
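Python's name mangling already gives roughly these semantics, so the proposed mixin rule can be sketched directly (the M1/A/F1 names follow the example in the text):

```python
class M1:
    """The mixin. Inside M1, self.__F1 mangles to self._M1__F1:
    state visible only through M1's own methods."""
    def bump(self):
        self.__F1 = getattr(self, '_M1__F1', 0) + 1
    def m1_field(self):
        return getattr(self, '_M1__F1', 0)

class A(M1):
    """The host class. Its own __F1 mangles to _A__F1,
    a different variable entirely, as the text prescribes."""
    def poke(self):
        self.__F1 = 99
    def a_field(self):
        return getattr(self, '_A__F1', None)

a = A()
a.bump()
a.poke()
assert a.m1_field() == 1    # unaffected by A.poke
assert a.a_field() == 99
```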

---

the comments i just wrote in [proj-oot-ootLevelsNotes1], regarding how Oot should have both 'or' and muKanren 'disj' just be 'or' and use ad-hoc polymorphism to disambiguate, led me to speculate once again on the different, dissociable aspects that we associate as parts of 'oop':

so in haskell we get the encapsulation via typeclasses; this should be enough to treat both 'disj' and 'or' as 'or'. Clojure's multimethods should support this, too.

---

i guess i like the idea of using interfaces to encapsulate the representation of data structures, but not for everything else; all of the various methods and utilities that are provided by a module to operate on a new data structure shouldn't be object methods, they should be ordinary functions. So, many data structury Oot modules should provide a small interface, one or more implementations of an interface, and many top-level functions that take one or more instances of that interface as arguments.

now what about the problem in Python where sometimes you have something like operation1(list_var) and other times you have something like list_var.operation2(), which makes it awkward to chain/pipeline operation1 and operation2 together? i think this is a syntactic problem and we should deal with it syntactically, perhaps with x.f1.f2 = f2(f1(x)), or perhaps with some pipeline form like x |> f1 |> f2.

---

" Btw, if you ask a C++ developer to describe OOP, he'll say it's the ability of having structs with a VTable attached. Most people think OOP is about state. That's not true, OOP is about single (or multi) dispatch and subtyping (having a vtable), hence it's not at odds with FP, unless you want it to be ;-) "

---

"

alankay1 11 hours ago

Object oriented to me has always been about encapsulation, sending messages, and late-binding. ...

reply "

---

a comment by Rimantas Liubertas on OOP methodology is interesting:

https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53#.imao8qpbf

he suggests:

the first one is too vague for me and i dont agree with the second one (but that's just semantics anyways). The third and fourth are interesting, though.

---

notes on https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53#.o32umqr5f

the three pillars of OOP are:

the (purported) goal is code reuse.

problems with inheritance:

Banana Monkey Jungle Problem: in practice there tend to be so many inter-class dependencies that you end up having to import a ton of different classes just to be able to reuse one class. Composition works better (my note: i don't see how that would help?)

diamond inheritance problem: actually he just describes a simpler multiple inheritance problem (namely if two parent classes conflict, which one to choose?). One solution is never to use multiple inheritance and to use composition instead.

Fragile Base Class Problem: subclasses are coupled to the base class. Example is given in which a base class changes in a way which doesn't break its external contract, yet still breaks subclasses. One solution is composition.

The example is:

initial base class:

import java.util.ArrayList;
 
public class Array
{
  private ArrayList<Object> a = new ArrayList<Object>();
 
  public void add(Object element)
  {
    a.add(element);
  }
 
  public void addAll(Object elements[])
  {
    for (int i = 0; i < elements.length; ++i)
      a.add(elements[i]); // this line is going to be changed
  }
}

subclass:

public class ArrayCount extends Array
{
  private int count = 0;
 
  @Override
  public void add(Object element)
  {
    super.add(element);
    ++count;
  }
 
  @Override
  public void addAll(Object elements[])
  {
    super.addAll(elements);
    count += elements.length;
  }
}

change in base class:

  public void addAll(Object elements[])
  {
    for (int i = 0; i < elements.length; ++i)
      add(elements[i]); // this line was changed
  }

now the subclass double-counts during addAll; previously the base class directly called ArrayList.add, but now it goes through its own add method, which has been overridden by the subclass to increment the count.
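The composition fix the author suggests can be transliterated to Python (a hypothetical sketch, not the article's code): wrap the list instead of subclassing it, so the underlying collection's internal self-calls can never re-enter the wrapper and double-count.

```python
class ArrayCount:
    """Composition version: holds a list rather than extending one.
    Nothing inside list's own methods can dispatch back into this class."""
    def __init__(self):
        self._items = []
        self.count = 0

    def add(self, element):
        self._items.append(element)
        self.count += 1

    def add_all(self, elements):
        # we control both sides of the contract here; however list.extend
        # or list.append are implemented internally, count stays correct
        for e in elements:
            self.add(e)

a = ArrayCount()
a.add(1)
a.add_all([2, 3])
assert a.count == 3
assert a._items == [1, 2, 3]
```

The trade-off is boilerplate: the wrapper must re-export every operation it wants to expose.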

The Hierarchy Problem: eg if you have a company with documents, do you have a Company class with a Documents class in it, or a Documents class with a Company class in it? One solution is to use tags for stuff like this, and only use hierarchies in cases of exclusive containment (eg socks are exclusively contained in your dresser; files on a hard drive are exclusively contained in their directories).

Encapsulation:

Problem with encapsulation: if an object holds a reference to another object, and that reference is aliased elsewhere, then people elsewhere can affect the internal state of this object without it knowing. The solution is value types (or, cloning everything); but the author notes that some things (operating system resources involved with external impure operations) cannot be cloned.

Imo this is a problem, but not really with 'encapsulation'; encapsulation still fulfills its promise of allowing other objects not to worry about the internal representation of this object. Also, just because there is a problem with impurity doesn't mean that encapsulation can't be used in those cases where everything is pure.

Polymorphism:

The author likes polymorphism but says that it doesn't require OOP, so therefore it's not an argument for OOP.

---

" The real problem with OOP is not that it forces you to do things a particular way, it's that what's new about it (privileging the first argument of each procedure, inheritance, lots of hidden mutable state) is bad, and what's good about it (encapsulation, polymorphism) is not new. "

-- https://news.ycombinator.com/item?id=12155019

" > what's new about it (privileging the first argument of each procedure

Multimethods, CLOS, Dylan? "

---

" Things OO languages 'force' us seem to be relatively easy to use: sending a message to a certain object, thinking in categories (elephants, houses, chairs, moveable objects, ...), state (position, speed, size, shape),... Thus it is not surprising that the overwhelming majority of software written in the last 10 years has been written in OO languages. "

---

" 6. Interfaces achieve polymorphism without inheritance.

Interfaces long for inheritance-like features. For example, see Java 8's introduction of default methods, or the boilerplate involved in implementing certain Haskell typeclasses. "

---

threatofrain 18 hours ago [-]

I think that inheritance and composition are not the interesting aspects of object oriented design, or of any language or paradigm. These are tools for code brevity / reuse / elegance. Advanced copying or ctrl-c.

I would also argue that too much elegance in any language or design paradigm faces the same problem, in that elegance is often (but not completely) in tension with modularity or granularity of control, which is an engineering subgoal, because a common engineering situation is to experience system change (feature growth or reorganization of code), and improvements in granularity or modularity makes system restructuring easier.

I think that object oriented design is more essentially about modelling distributed state, because you probably have multiple objects with their own internal state. I believe this means that object oriented design is highly concerned with protocolized communication and synchronization between distributed states, whether via messages or channels or something else.

I believe that in distributed situations, object oriented design can be very harmonious with functional reactive programming strategy. You can easily and usefully have a situation where an object functionally updates its internal state with a typed stream of inputs.

reply

---

i probably said this before, but rather than resolving inheritance conflicts implicitly by having the language specify an implicit ancestor precedence ordering for multiple inheritance, i'd rather force the implementor of a subclass to explicitly resolve these conflicts, and present a compile error until they do so.

For example, if class D inherits from B and C, and both B and C have an attribute/field/method "f1", and the subclass didn't override f1, then this is a compiler error until the implementor resolves it.

Resolutions could be:

however, in the clean case of the 'diamond problem', where B and C both inherited f1 from A, and didn't override it, perhaps this would not be a compiler error(?)
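A sketch of this rule as a Python class decorator (all names invented): conflicting attributes defined directly on two bases are a hard error unless the subclass overrides them, while the clean diamond case, where both bases share one attribute inherited from a common ancestor, is not flagged.

```python
def explicit_conflicts(cls):
    """Reject cls if two bases directly define the same attribute and
    cls does not override it. A sketch; a real compiler pass would
    also walk full ancestor chains."""
    names = set()
    for base in cls.__bases__:
        names |= {n for n in vars(base) if not n.startswith('__')}
    for name in names:
        owners = {getattr(b, name) for b in cls.__bases__ if hasattr(b, name)}
        if len(owners) > 1 and name not in vars(cls):
            raise TypeError(f"{cls.__name__}: ambiguous inherited '{name}'")
    return cls

class B:
    def f1(self): return 'B'

class C:
    def f1(self): return 'C'

try:
    @explicit_conflicts
    class D(B, C):              # conflict, no override: error
        pass
    rejected = False
except TypeError:
    rejected = True
assert rejected

@explicit_conflicts
class D2(B, C):
    def f1(self):               # explicit resolution: pick B's
        return B.f1(self)

assert D2().f1() == 'B'

# the clean diamond: both bases share one inherited f1, so no error
class A0:
    def f1(self): return 'A0'
class BA(A0): pass
class CA(A0): pass

@explicit_conflicts
class Diamond(BA, CA): pass
assert Diamond().f1() == 'A0'
```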

---

" Sometimes you just really need dynamic typing. One of Fantom's pivotal features is the ability to call a method using strong or dynamic typing. If you call a method via the "." operator, the call is type checked by the compiler and compiled into an efficient opcode. But you can also use the "->" operator to call a method dynamically. This operator skips type checking, and can be used to implement duck typing. The "->" operator actually routes to the Obj.trap method which can be overridden to build all sorts of nifty dynamic designs. "

---

" There are plenty of good reasons why Java and C# ended up using the class/interface model. Multiple inheritance offers lots of power, but comes at the expense of complexity and some pretty nasty pitfalls. Fantom takes a middle of the road approach called mixins. Mixins are essentially Java or C# interfaces that can have method implementations. To avoid some of the pitfalls of true multiple inheritance, mixins restrict features such as fields which store state. Mixins are a very nice feature in the Fantom toolbox when it comes to designing your object oriented models. "

---

i dont understand this and i'm not sure that i need to, just putting this here for future reference:

"

cynicalkane 626 days ago [-]

OO--in Haskell, at least--is most succinctly represented as existential and higher-order types with constraints, which break some important assumptions used for proving things in FP. If you are message passing s.t. the call site is opaque to the caller, this breaks more assumptions. The assumptions in question happen to be important assumptions for the currently cool and trendy research in FP, which is important when you're an academic with a career.

Furthermore, the types required are a bit more general than OO, so once you've introduced the former it doesn't make sense to constrain your landscape of thought to the latter.

tel 626 days ago [-]

I'm not sure it really breaks assumptions: it just requires coinduction and bisimulation instead of induction and equality. Coinduction and Bisimulation aren't as well understood today and are harder to use, so it's a bit of a rough project to move forward with.

What assumptions are you referring to? "

---

wyager 626 days ago [-] ... If you mean mutable + stateful programming with objects as an abstraction mechanism (which is what people usually mean by "OO"), then yes, it is in some ways opposite to pure FP.

OO to FP is kind of like Turing Machines to the Lambda Calculus. ...

---

" Problem 3: CLOS

Note: this section contains a few factual errors pointed out by Pascal Costanza in a comment below. Some of his corrections are also, of course, opinions, and I've commented on them later in the thread. In any case, while I thank Pascal for his corrections, the errors I've made are utterly irrelevant to my conclusions.

CLOS is icky. I haven't worked with Smalltalk a whole lot, but I've worked with it enough to know that to do OOP right, you have to do it from the ground up. CLOS was bolted on to Common Lisp. Everyone knows it, although not many people want to admit it.

...CLOS has problems. One obvious one is that length isn't a polymorphic function....

Another problem is the slot accessor macros. They're insanely clever, but clever isn't what you want. You want first-class function access, so you can pass the getters and setters to map, find-if, etc. You can work around these things, but they're a leaky abstraction, and enough of those will add up to significant mental resistance to getting things done. It's like all those weird little rules in Perl: non-orthogonal rules that add up to forgetting the language every time you leave it alone for more than a week.

What you really want in lieu of CLOS is... complicated. It's a hard problem. Lisp wants to be constructed entirely from macros.((but))...Having the object system — something pretty fundamental to the language, you'd think — written as a bunch of macros doesn't feel right when all is said and done.

When you work with Ruby or Smalltalk or any suitably "pure" OO language (Python doesn't quite count, unfortunately; its bolts are also showing), you realize there are some distinct advantages to having everything be an object. It's very nice, for instance, to be able to figure out what methods are applicable to a given class (e.g. "foo".methods.sort.grep(/!/) from Ruby), and to be able to extend that list with your own new methods. It's a nice organizational technique.

Of course, that forces you into a single-dispatch model, so it becomes harder to figure out what to do about multi-methods. Some Python folks have implemented multi-methods for Python, and they do it by making them top-level functions, which makes sense (where else would you put them?) I'm not claiming that Smalltalk's object model is going to translate straight to Lisp; you have to decide whether cons cells are "objects", for instance, and that's a decision I wouldn't wish on my worst enemy. I don't envy the person who tackles it.

....But changing CLOS to be simpler and more seamless essentially means replacing it. And replacing it is probably best done inside the implementation. ... Or maybe you could go the Haskell route and not have OOP at all. That seems to alienate most programmers, though, despite the attractions of not having to create nouns for everything. (Have you ever noticed that turning a non-object-oriented program into an object-oriented one in the same language that does the same thing essentially doubles its size? Try it sometime...) At the risk of predicting future fashion trends, which is rarely a good idea, I'll venture that objects are going to continue to be trendy for at least a few more decades. So I think Lisp needs some form of "seamless" OOP. " -- [2]

(note: excerpt from Costanza's response in comment was:

" ...

(specializer-methods (class-of object))

This works for any object because all objects have a class in Common Lisp, without any exception (including cons cells).

)

---

more problems with Oop (from a post about good things about Clojure):

" 3) Data first programming

Walking away from object-oriented languages is very freeing.

I want to design a model for the game of poker. I start by listing the nouns3: “card”, “deck”, “hand”, “player”, “dealer”, etc. Then I think of the verbs, “deal”, “bet”, “fold”, etc.

3: For the record, I know that this isn’t the “right” way to design OO programs, but the fact that I have to acknowledge this proves my point. ↩

Now what? Here's a typical StackOverflow question demonstrating the confusion that comes with designing like this. Is the dealer a kind of player or a separate class? If players have hands of cards, how does the deck keep track of what cards are left?

At the end of the day, the work of programming a poker game is codifying all of the actual rules of the game, and these will end up in a Game singleton that does most of the work anyway.

If you start by thinking about data and the functions that operate on it, there’s a natural way to solve hard problems from the top-down, which lets you quickly iterate your design (see below). You have some data structure that represents the game state, a structure representing possible actions a player can take, and a function to transform a game state and an action into the next game state. That function encodes the actual rules of poker (defined in lots of other, smaller functions). "

--- [3]
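The 'data first' shape the quote describes can be sketched in a few lines of Python (all names here are illustrative, not from the quoted post): plain data for the game state, plain data for actions, and one function encoding the rules.

```python
def step(state, action):
    """Takes a game state and a player action, returns the next state.
    State is a plain dict; nothing is mutated in place."""
    kind = action['kind']
    if kind == 'bet':
        player, amount = action['player'], action['amount']
        stacks = dict(state['stacks'])
        stacks[player] -= amount
        return {**state, 'stacks': stacks, 'pot': state['pot'] + amount}
    if kind == 'fold':
        active = [p for p in state['active'] if p != action['player']]
        return {**state, 'active': active}
    raise ValueError(f"unknown action {kind}")

state = {'stacks': {'alice': 100, 'bob': 100}, 'pot': 0,
         'active': ['alice', 'bob']}
state = step(state, {'kind': 'bet', 'player': 'alice', 'amount': 10})
state = step(state, {'kind': 'fold', 'player': 'bob'})
assert state['pot'] == 10
assert state['stacks']['alice'] == 90
assert state['active'] == ['alice']
```

The dealer-vs-player and deck-vs-hand questions dissolve: they are all just fields of the state, and the rules live in `step` and its helpers.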

---

from Python:

PEP 487: Simpler customization of class creation

It is now possible to customize subclass creation without using a metaclass. The new __init_subclass__ classmethod will be called on the base class whenever a new subclass is created:

class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass

---

PEP 487: Descriptor Protocol Enhancements

PEP 487 extends the descriptor protocol to include the new optional __set_name__() method. Whenever a new class is defined, the new method will be called on all descriptors included in the definition, providing them with a reference to the class being defined and the name given to the descriptor within the class namespace. In other words, instances of descriptors can now know the attribute name of the descriptor in the owner class:

class IntField:
    def __get__(self, instance, owner):
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise ValueError(f'expecting integer in {self.name}')
        instance.__dict__[self.name] = value

    # this is the new initializer:
    def __set_name__(self, owner, name):
        self.name = name

class Model:
    int_field = IntField()

---

someone's opinion about OOP:

" Those features which are potentially good (data hiding, contract enforcement, polymorphism) are not unique to OOP and, in fact, stronger versions of these things are available in non-OOP languages. Those features that are unique to OOP (dependency injection, instantiation) are awful and exist only because OOP is awful. "

---

" what do they view as the core ideas of OOP? CodeBetter focuses on this list of features:

Encapsulation (((they also mention getter and setter properties here))) ...

Abstraction (((the description is confusing but mb they mean having base classes? they say "As development progresses, programmers know the functionality they can expect from as yet undeveloped subsystems. "))) ...

Inheritance Objects can relate to each other with either a “has a”, “uses a” or an “is a” relationship. ...

Polymorphism ... "

in a Chair example: " it is only necessary to figure out what your application needs to know about and do with chairs "

..."Structs (structures), records, tables, and other ways of organizing related information predated object oriented programming."

"...The most over-used and rather worthless discussion on inheritance that you will see revolves around the “Is-A vs Has-A” discussion. For example, a car is-a vehicle but has-a steering wheel. The idea these authors are chasing is that your car class should inherit your vehicle class and have a steering wheel as a member...The problem here is that inheritance is mixing together several things: you inherit “typeness”, interface, and implementation all at the same time. However, all of the examples focus on interface while talking about “typeness”. The abstract code doesn’t care that a car “is-a” vehicle, just that the objects respond to a certain set of functions, or interface. In fact, if you want to give your chair class accelerate(), brake(), turn_left() and turn_right() methods, shouldn’t the abstract code be able to work on chairs then? Well, of course, but that doesn’t make a chair a vehicle."

" In Clojure, inheritance is simple:

(derive ::rect ::shape)   ;=> nil

(derive ::circle ::shape) ;=> nil

(isa? ::circle ::shape)   ;=> true

(isa? ::rect ::shape)     ;=> true

Here, I get to define my data-type hierarchy independently of my functions and independently of any state. We do not need OOP to have inheritance. "

"

Also, there are the SOLID principles, which describe good architectural ideas for software, and which assume that OOP programming is the best way to implement them:

    Single responsibility principle
    a class should have only a single responsibility (i.e. only one potential change in the software’s specification should be able to affect the specification of the class)
    Open/closed principle
    “software entities … should be open for extension, but closed for modification.”
    Liskov substitution principle
    “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” See also design by contract.
    Interface segregation principle
    “many client-specific interfaces are better than one general-purpose interface.”[8]
    Dependency inversion principle
    one should “Depend upon Abstractions. Do not depend upon concretions.”[8]
    Dependency injection is one method of following this principle.
    "

" All the same, when I go to a job interview, the things I get asked about the most often are encapsulation and polymorphic dispatch and inheritance, so simply by having those 3 on the list, I think I am covering the core ideas that people associate with OOP. "

" It is tragic that OOP has had to retreat from such a powerful idea ((inheritance)), to the extent that composition is now the favored strategy: ... Back in the 1960s, OOP introduced the concept of inheritance, and yet now we know OOP is a terrible way to implement inheritance. And yet, we often want inheritance, not composition.

Inheritance is good. Nearly all data-types belong to a hierarchy, and my code should be able to represent that hierarchy without fear. If I write software for a store, I will probably have a “sell price” and a “buy price”, both of which are children of “price” which is a child of “transaction metric” which is a child of “decimal” (or “number”). I don’t want to model this with composition, nor do I want to worry about brittle base classes and tight-coupling between the interface and the implementation. To escape from these worries, I want my data-types declared in a hierarchy that is all-together separate from my code’s behavior (the functions that I write). Functional languages such as Shen, Haskell or Clojure allow for data-type definitions that are separate from the behavior of my code. Java does not. Ruby does not. Python does not. "

((i think what e wants is interface/typeclass inheritance))

" 1.) Those items which are potentially good (data hiding, contract enforcement, polymorphism) are not unique to OOP and, in fact, stronger versions of these things are available in non-OOP languages.

2.) Those items that are unique to OOP (dependency injection, constructor values) are awful and exist only because OOP is awful. " --- [4]

"Concurrency Oriented Programming ((eg Erlang)) also provides the two major advantages commonly associated with object-oriented programming. These are polymorphism and the use of defined protocols having the same message passing interface between instances of different process types." -- [5]

[6] wants to keep the following things separate:

and wants to keep 'data-type hierarchy' separate from 'behavior' -- that is, he doesn't like how in many OOP languages, subclassing of types is tied up with inheritance of implementations. He points out (above) that Clojure has constructs (derive, isa?) that do data-type subclassing without method-implementation inheritance.

I think what e means about keeping 'enforcing a contract' separate from 'mutating state' is that he wants behaviors (which is what obeys contracts, i guess) to be functions, not methods, and he wants mutable state to be stored separately in a small number of global variables.

" I recall when I first discovered Ruby On Rails, back in 2005. It had tools that would auto-generate my model classes for me, which I thought was very cool. More so, the model classes were simple. If I had database tables for “users”, “products”, “sales” and “purchases”, the tools would probably generate 4 files for me, that would look like this:

class User end

class Product end

class Sale end

class Purchase end

At the time, I thought, this is fantastic! I didn’t have to write getters and setters, that was implied! My limited experience with Java had left me with a bad taste in my mouth — Java was verbose! I had to write every function to get and set a variable. How tedious! (Even PHP, a script language, forced me to write getters and setters, just like Java.) Ruby saved us from all that. But the present constantly changes the meaning of the past. The years go by and we see things differently. I spent 2 years away from Ruby, working with Clojure, and then I came back to Ruby and I was astonished to realize that it now seemed high-ceremony to me. Look at those 4 empty classes! That’s 4 files, and 8 lines of code, that are mere boilerplate — they don’t help me with my task, I just have to write it so that when method_missing is called it will have some context to work with. What seemed low-ceremony to me, in 2005, strikes me as high-ceremony in 2014. "

"But they can never get there, since the language forces them to declare data-types, behavior and state all in one spot. "

---

should support multiple constructors (polymorphism by argument type, in general)
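For comparison: Python has no constructor overloading, so the usual workaround is alternate classmethod factories; a language with polymorphism by argument type could dispatch on the argument types directly. A sketch (Point and its factory names are invented for illustration):

```python
import math

class Point:
    """One logical type, several construction paths."""
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    @classmethod
    def from_tuple(cls, t):
        # 'overload' taking a (x, y) tuple
        return cls(t[0], t[1])

    @classmethod
    def from_polar(cls, r: float, theta: float):
        # 'overload' taking polar coordinates
        return cls(r * math.cos(theta), r * math.sin(theta))
```

With true multiple constructors, Point((1, 2)) and Point(r, theta) could be distinguished by the compiler instead of by factory name.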

---

i've been thinking of objects in Oot as "packages of state", eg mutable structs accessed via reference, but i guess this is actually orthogonal to "a data representation encapsulated with the methods needed to access it". You can have encapsulation of data representation even if the struct is immutable and passed by value. In fact, the same data representation and methods can be used to operate on a mutable struct in-place, or on immutable data. And in Oot, we want immutable to be the default.

Although i disagree with much of it, [7] points out that "state is everywhere in an OOP program...Ask a Java programmer how many non-constant (mutable) variables exist in their current project and they would have to do some careful research to find out the answer. As a point of contrast, a typical Clojure app centralizes all state in a few global vars, which are then worked upon by all the functions in the program — the idea of “few nouns, lots of verbs”...". We need to make it easy for programmers to see where in code mutable state exists. Previously, i suggested that the sigil '&' prefix any variable that is (non-locally) mutable (where by 'locally mutable' i mean things like "x = 3; x = 5; print(x)" that can be easily compiled to "x1 = 3; x2 = 5; print(x2)"). Perhaps:

---

alkonaut 842 days ago [-]

What OO brought to The table (and what many advocates of FP are forgetting) is context sensitivity, that is, functions/methods are brought in scope by the owning object rather. Typing dog.bark() is a thousand times more powerful in terms of tooling than bark(dog) or SomeDogPackage::bark(dog). I prefer the FP style of coding but I can't see average devs giving up on Java/C# style tools.

Sure OO brought a lot of little code smells too, and we often end up making bad code, but it's not like we have to. Polyglot programmers that use e.g. Java+Scala or C#+F# probably make their OO code much better than others. I have almost completely stopped using mutable objects, long inheritance chains and nulls in C#, as an influence from F#.

For some scenarios having mutable objects is a near perfect fit (scene graphs for games or UI:s are good examples of code where both FP and non-object imperative usually looks worse).

---

" I embarked on a bold experiment: give up OO code completely. Immediately I ran into a situation where I needed polymorphism, and resorted to strewing code with switch statements. I decided to quit that, and just declare some classes -- but only to handle this one instance where polymorphism was needed. Then a funny thing happened: or didn't happen, as it turned out. Nothing else happened. It's 2014, and nothing else has happened. I've never encountered a situation where I needed inheritance. I've never encountered a situation where I needed encapsulation. And since I thought an "object oriented language", by definition, was a language that supported inheritance, encapsulation, and polymorphism... well, I only needed one out of the three. ... ...functions and data structures belong in totally different worlds" Yes, exactly. This is another thing I discovered in 2004 when I quit (well, tried to quit -- came back for polymorphism) OO programming -- I went back and read a bunch of stuff from the original creators of OO like Alan Kay, and found they were doing everything before E.F. Codd and the invention of the relational data model, and they had ideas about data structures and ideas about code execution all mixed together. That's why an objects is a set of data fields and a set of functions (called methods) bound together into a single entity, while at the same time you have notions like "inheritance" that try to represent relationships between data. It's much better to separate your code from your data, represent your data relationships explicitly (using the relational model or memory pointers or whatever you want to use), and represent your code and your computation processes explicitly without implicit bindings to data structures. The resulting code is super-clear ... 
If the relationships between the data structures in your program are isomorphic to the relationships between the things those data structures represent in the minds of your users in the real world, you can always say "yes!" when a customer asks you if you can implement a new feature, even if it's something you never thought of when you originally wrote the program. "

---

summary of http://harmful.cat-v.org/software/OO_programming/why_oo_sucks

---

" "A closure is an object that supports exactly one method: "apply".

... But from another perspective, the apply "method" of a closure can be used as a low-level method dispatch mechanism, so closures can be, and are, used to implement very effective objects with multiple methods. Oleg Kiselyov has a short article on the subject: http://okmij.org/ftp/Scheme/oop-in-fp.txt

Used in this way, closures can be said to be richer than objects because they can support many more capabilities than just a single language-provided method dispatch mechanism. ... combine Norman Adams (alleged source of "objects are a poor man's closures") and Christian Queinnec ("closures are a poor man's objects") ... "

-- [8]
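A minimal Python rendering of the Kiselyov-style "closures as objects" idea the quote describes (make_counter is an invented example): the constructor returns a dispatch closure, and the captured locals are the private state.

```python
def make_counter(start=0):
    count = start  # private state, visible only to the closures below

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    def dispatch(message, *args):
        # the single "apply" entry point doubles as a method dispatcher
        methods = {"increment": increment, "value": value}
        return methods[message](*args)

    return dispatch

counter = make_counter()
counter("increment")  # the closure *is* the object
```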

white-flame 9 hours ago [-]

Multiple closures can overlap in their referencing of some set of variables. Objects can't really do this, so they're not that similar.

Only the most simplistic application of closures (one unique set of instantiated variable slots per closure) matches object encapsulation.

reply

metaphorm 3 hours ago [-]

> Objects can't really do this, so they're not that similar.

they can't? news to me. how does namespacing work, in say Python, then?

reply

dllthomas 3 hours ago [-]

Objects containing references can share referents.

reply

---

derefr 15 hours ago [-]

My definition of an object is "an ADT that gets to 'decide', using private runtime state, what to do in response to any/all attempts to interact with it."

Which makes for a very simple test: if something is an object, you should be able to send it a message that makes it change its mind about what messages it accepts after that.

There's no obvious way to implement a behavior like this using the native "objects" in nominally-object-oriented languages like C++ or Java. (But you can do so with the closure types from these languages. Or you can look higher on the stack: threads with IPC queues, or POSIX processes, are both objects.)

But implementing this behavior is obvious in Ruby; this is how basic things like Object#extend work.

And implementing this behavior is also obvious, oddly enough, in Erlang: http://joearms.github.io/2013/11/21/My-favorite-erlang-progr...

In conclusion, Erlang is more object-oriented than Java. ;)
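derefr's test ("send it a message that makes it change its mind about what messages it accepts") is actually easy to sketch in Python via __getattr__; Shapeshifter and its method names are invented for this illustration:

```python
class Shapeshifter:
    """Accepts a mutable set of messages, decided at runtime."""
    def __init__(self):
        self._methods = {"greet": lambda: "hello"}

    def __getattr__(self, name):
        # called only when normal attribute lookup fails
        try:
            return self._methods[name]
        except KeyError:
            raise AttributeError(name)

    def learn(self, name, fn):
        self._methods[name] = fn

    def forget(self, name):
        self._methods.pop(name, None)
```

After s.forget("greet"), the object no longer accepts the greet message; after s.learn("bark", ...), it accepts a message it never did before.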

...

noblethrasher 15 hours ago [-]

...

I define an object as any component with which you interact strictly by way of negotiation (as opposed to command and control), and I finally think I understand why Alan Kay says that the most important thing in OOP is the message passing.

unscaled 12 hours ago [-]

To be entirely accurate, you're wrong when you say your definition of OOP is the original one. SIMULA-67 predates Smalltalk and its object system is much closer to Java than it is to Smalltalk, although it helped inspire both.

To be more fair, there are two very different schools which interpreted and continued to develop Simula in two radically divergent ways.

One of the schools is focusing on clear division between rigid, statically defined classes that never change (but may extend other classes) and runtime instances, which can contain only state, not behavior. This school does everything to exhaustively codify the interfaces exposed by the classes: ensure that a predefined set of messages with a predefined set of parameters for each message (a.k.a methods) is permitted, control access to these methods and in the most extreme languages (e.g. Eiffel) even verify that object state and parameters conform to a predefined contract. This school is obviously not opposed to modifying behavior patterns based on runtime state, but it believes that the object system should not directly support that, and encourages programmers to implement their own mechanism on top of rigid-class object systems: this is what design patterns are usually meant to achieve.

The other school believes that an object system's first priority is giving objects absolute autonomy in parsing the messages they receive and as a result usually ends up with object systems that focus on powerful runtime extension and arbitrary message parsing functionality instead of compile-time strictness.

We could trace them back to their origins and call them the C++ school and the Smalltalk school. We could pin them to their modern champions and call them the Java school and the Ruby school. We could follow their poster boys and call them the class school and the message school. We could go on with type of languages they tend to thrive in and call them the dynamic school and the static school.

...

 naasking 14 hours ago [-]

A definition of objects that rules out immutable objects doesn't seem viable to me.

Furthermore, your definition merely requires mutability, which is a feature of Java and C++ objects, so I'm not sure what you think is lacking. Your definition reduces to "objects undergo state transitions".

reply

seanmcdirmid 13 hours ago [-]

Objects require at least an identity. Immutable data structures necessarily don't have identity (well, to say identity isn't meaningful because aliases can simply be seen as copies). There is more to it than identity, and you can always use a mutable variable or rebound parameter to recreate an identity for an immutable object (I.e. the world as an input and output).

reply

naasking 5 hours ago [-]

I disagree. Encapsulation is the only universal property of object oriented abstraction (see Will Cook's papers on OO vs ADTs).

Identity is merely a property of some objects, not all objects.

reply

seanmcdirmid 4 hours ago [-]

I disagree with that and that particular paper. We must think of objects from a design perspective, not a technical one, and the one thing that distinguishes objects is their ability to be "things" rather than just constructs.

42 is not an object because the value can reproduced easily. John Smith, on the other hand, is not just a value, but has a name and an identity, is not exactly the same person everyday, and so on. 42 can be used in equational reasoning (it never lies) while John Smith cannot. John Smith, on the other hand, can be attributed with semi-autonomous behavior while such notion isn't meaningful for 42.

reply

DougBTX 3 hours ago [-]

It sounds like you're using "object" and "not an object" in the same way I would use "entity" and "value". I think this is Java terminology that gets used in C# sometimes too, I'm mostly thinking about NHibernate.

...

---

While I've seen various methods in scheme to emulate objects with closures, I haven't seen a good one from the statically typed FP camp (the ML family, Haskell, etc). Meanwhile, objects are able to masquerade as closures very nicely in statically typed OO object systems like Scala and C#.

So I lean more towards closure being a poor mans object than vice versa.

reply

tel 7 hours ago [-]

OCaml, Objective Caml, has a built-in Object system. Objects are immutable which prevents them from carrying identity (though mutation can be opted into to recover this). They're also "just" row-typed records. Classes are a form of constructor function which have inheritance for code sharing purposes. This isn't emulating objects with lambdas, but it is a well-articulated idea of what objects in a pure-ish FP setting might look like.

There have been many attempts to bring various forms of "object" into Haskell. It tends to be either a very simple or a very messy affair. On the messy side, strict purity blows up object identity again and there are only very messy ways to recover it. This makes most OO such a pain that you're more likely to factorize it into smaller pure components which appear more functional.

On the simple side, corecursive patterns are distinctly "object like" and are used all the time. Laziness gives you infinite data structures which, in effect, give you a limited form of mutability which can be incredibly helpful while still be easy to reason about.

---

Finally, I think that practically there is a strong case to be made for introducing actor-like objects into Haskell. It has a great runtime and exception system to support it, and it would allow a bit more structuring "at the highest level" of application architecture. Today these are often managed as imperative programs (built an IO chunk, execute it). This tends to work, but Erlang did it better.

reply

kaosjester 2 hours ago [-]

Perhaps unsurprisingly, Oleg rebuilt the linked implementation in Haskell [0]. It works exactly as you'd expect, right out of the box.

https://arxiv.org/abs/cs/0509027

---

jfoutz 16 hours ago [-]

I enjoyed building object systems in scheme. The closure way is kind of like focusing on the code without worrying too much about the underlying data structure. The object way is worrying about the data structure and the code is secondary.

reply

Tloewald 15 hours ago [-]

I think in effect you're saying that they're duals.

I think I would argue that objects bundle data and functions in a useful way (when done right). Closures are more of an expressive convenience for constructing functions on-the-fly. The overreach for objects is to consider them as the only way to organize data and functions.

reply

---

cperciva 15 hours ago [-]

They're both wrong. Closures and Objects are both structures with function pointers and delusions of grandeur.

reply

taneq 15 hours ago [-]

Which is funny because I could never get anyone, when I first started programming in C and everyone was raving about how great 'object oriented programming' was, to tell me what 'an object' actually was. When I started learning C++ I was somewhat disappointed to find out an object was just a struct with some functions attached.

(It's kind of like when people told me about "user defined data types" and I couldn't wrap my head around how you could create a new data type with all its associated syntax. Then when I learned that they were just bundles of existing data types, though 'oh is that all?')

reply

---

ridiculous_fish 12 hours ago [-]

In Midori, objects are capabilities. This is possible because type safety is enforced and an object's type cannot be forged.

It's easy to imagine a capabilities system built around closures. For example, a capability to send on a socket could be an unforgeable closure that performs that sending. Are there any systems that work this way?

reply

andrewflnr 11 hours ago [-]

More details on Midori: http://joeduffyblog.com/2015/11/03/blogging-about-midori/ This is the first of several blog posts by one of the authors. I haven't read all of them, but liked what I read so far. Does anyone have other resources with more detail?

reply

humanrebar 5 hours ago [-]

As a counterpoint, people would make poor-man objects in ancient Fortran by using array indices instead of addresses to reference the object. And you could use integers and lots of conditionals to represent polymorphism. You'd have to ban all sorts of things to lock out developers from making their own custom object systems.

reply

---

In 'concept-oriented programming', you have object-like things but, orthogonally, you can inject behavior that executes before or after the methods of an object instance.

more detail: https://arxiv.org/ftp/arxiv/papers/1501/1501.00720.pdf
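A rough Python analogue of that before/after injection (the real concept-oriented mechanism is richer; with_hooks, Account, and the hook lambdas are invented for this sketch):

```python
import functools

def with_hooks(before=None, after=None):
    """Wrap a method with injected pre/post behavior."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            if before:
                before(self, *args, **kwargs)
            result = method(self, *args, **kwargs)
            if after:
                after(self, result)
            return result
        return wrapper
    return decorate

log = []

class Account:
    def __init__(self):
        self.balance = 0

    @with_hooks(before=lambda self, amt: log.append(("before", amt)),
                after=lambda self, res: log.append(("after", res)))
    def deposit(self, amt):
        self.balance += amt
        return self.balance
```

The injected behavior runs around every call to deposit without the method body knowing about it.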

---

another thing OOP is good for is saving clutter in argument passing, via the implicit 'self' argument in method calls. When a bunch of methods are calling each other and passing the same bunch of variables in each call, you can put these variables into fields of an object, and then they don't have to explicitly appear as arguments in the method calls.

Without OOP, you could still pack relevant variables together into a struct, but you'd have to explicitly pass this struct. This is less relevant in Python (where you have to call "self.method" anyway, and the method signature has an explicit first 'self' argument), and more relevant in C++, where 'this' is implicit in methods ("member functions").

We might want to separate this from OOP (so that OOP is only encapsulation of state) and generalize this so that you can implicitly access multiple namespaces from any function (instead of just providing implicit access to the instance variables from methods).

For example, a social networking service may have many user accounts, and each account may have many photo albums. Functions dealing with a single photo album (for example, code to select computer-curated 'highlights' from the photo album) might frequently have occasion to reference the containing user account, too. Rather than put the 'highlights'-computing function in an OOP 'photo album' object, you might just have a small 'photo album' object which encapsulates access to the photo album's state, and have 'highlights' be a separate function, but one which can access the namespaces of both the photo album and the containing user account without having to prefix them with stuff like "the_photo_album." or "the_user_account.".
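In Python the two namespaces must stay explicit; a sketch of the photo-album example above (all names invented) that keeps highlights a free function over both:

```python
from dataclasses import dataclass, field

@dataclass
class UserAccount:
    name: str

@dataclass
class PhotoAlbum:
    account: UserAccount          # back-reference to the container
    photos: list = field(default_factory=list)

def highlights(album: PhotoAlbum, n: int = 2):
    # a free function over both namespaces; in a language with implicit
    # namespace access, these "album." / "album.account." prefixes
    # could disappear
    tagline = f"highlights for {album.account.name}"
    return tagline, album.photos[:n]
```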

---

"it's kind of unfortunate when you have to use functions, because if you have to say, you know, HTMLElement.getChildren.whatever, it gets inverted with functions: whatever(getChildren(HTMLElement)). You have to call from the innermost one to the outermost... it's "backwards", right?"

---

" Object-Oriented Programming doesn't work that well for UIs. You want it to be declarative. HTML showed that you want a dialog, with a title bar, and a body, and it nests, to match the [UI] tree.

That works really well. It's succinct. Even in HTML it's succinct, right? Whereas with a [Java-style] object-oriented language, you've got to say, you know, createContainer(), addChild(), addChild(), addChild(), 'til the cows come home. And it doesn't look anything like... you can't pattern-match it and say "ah yes! this looks just like my UI!"

So people write these wrappers around Swing. Like there's Apache Jelly, which wound up with this XML framework to do Swing programming, that was 30% less verbose than Java.

What are the odds that XML's going to wind up being less verbose than anything? " -- [9]

---

" I’ll leave you with one more data point to ponder: Clojure has polymorphism, but it eschews concrete derivation. It has interfaces and protocols, but no inheritance chains. It’s not a design decision that gets put in the limelight much, but it is one I encourage you to study. This alone has tremendous impact on how maintainable systems are. "

---

" Bug: Renaming a function and forgetting to rename all of the overrides. Your Automobile class has a brake() function, and you decide to rename it to applyBrakePedal(). You get some compiler errors and fix up all of the callers, but you forget that the Batmobile subclass overrides brake() with logic to deploy the drag parachute, and now when Batman slams on the brakes, the parachute fails to deploy and he smashes into Macy's in spectacular fashion.

The Swift Fix: Override functions must use the override keyword. Now when you rename the superclass's function, all the subclass functions fail to compile because they're not overriding anything. Why did no one think of this before? I know you Eclipse users out there just click Refactor and go back to sleep, but for folks who use more primitive IDEs either by necessity or by choice, this is a great language feature. "

---