http://journal.stuffwithstuff.com/2010/09/18/futureproofing-uniform-access-and-masquerades/
points out three kinds of futureproofing in java:
these are all annoying boilerplate, but they are all good things to do because if you don't, and you want to make one of the following changes, then you must change every call site, instead of just one line. This is especially bad if your program is a shipped library and the call sites are in client code (e.g. you would have to make breaking/incompatible changes to your library):
now, Oot already deals with the first two of these, and maybe the third. If not, we should probably deal with the third, too. That is to say, when you call a constructor (if we have constructors at all; i'm leaning towards yes), you don't actually name an implementation, merely an interface, and perhaps a factory method that determines the implementation. In other words, 'everything is an interface', like we always say.
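to make that concrete, here's a rough Python sketch (not Oot; the Stack/new names are made up) of 'constructing' naming only an interface plus a swappable factory, rather than a concrete implementation:

from abc import ABC, abstractmethod

class Stack(ABC):
    """The interface that call sites depend on."""
    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

    @staticmethod
    def new():
        # the factory: the only line that names a concrete implementation
        return _ListStack()

class _ListStack(Stack):
    """One private implementation; swappable without touching any caller."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

s = Stack.new()   # callers never write _ListStack()
s.push(1)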
in http://journal.stuffwithstuff.com/2010/10/21/the-language-i-wish-go-was/, he also points out a few more that Go handles that Java doesn't:
in http://journal.stuffwithstuff.com/2010/10/21/the-language-i-wish-go-was/, he also points out other kinds of futureproofing that aren't needed in Java but that may be needed in other languages, such as Go:
---
should we have constructors, copy constructors, move constructors? and autodefined repr, equals (equivalence (ad hoc polymorphic)), structequals (true structural equality), memequals (reference equality), hash?
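for concreteness, a small Python sketch (made-up Celsius/Fahrenheit classes) of the three flavors of 'equals' being distinguished:

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees
    def __eq__(self, other):
        # 'equals': ad-hoc, overridable equivalence; may compare across types
        if isinstance(other, Fahrenheit):
            return self.degrees == (other.degrees - 32) * 5 / 9
        return isinstance(other, Celsius) and self.degrees == other.degrees

class Fahrenheit:
    def __init__(self, degrees):
        self.degrees = degrees

a, b = Celsius(100), Fahrenheit(212)
a == b                                       # equivalence (ad-hoc __eq__): True
type(a) is type(b) and vars(a) == vars(b)    # structural equality: False
a is b                                       # reference equality: False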
---
could require traits to be stateless and distinguish 'classes' which hold state
---
could have traits each have their own namespace and by default any instance methods and variables defined are in that namespace (note this requires explicit instance variable definitions). Now variable references are first looked up in local scope, then in the trait-local instance scope, and only then in the instance scope. Could also allow traits to declare instance methods and variables in instance scope, but with an explict notation (perhaps just Python self.varName for instance-level variables). If you do that, now two traits can introduce the same name without clashing, unless they intend to clash, without any compile-time checking (except for the trait names themselves, and for method names), and so we have commutative traits. no wait, you still have to compile-time check to prohibit the diamond problem, unless you require all fully-qualified names for inherited stuff.
however this compile-time checking either prohibits, or at least doesn't help with, dynamic addition of methods, right? so maybe require assertions in this case? that sounds troublesome; how would you upgrade an implementation of an API that used static methods to one using dynamically-generated ones, without making the client assert the existence of these? if you use methodMissing, you make the client change to capitalized method names.
note that as i've written this paragraph, it conflicts with the previous suggestions, e.g. as written, traits can declare new instance variables. But you could forbid that, on the idea that even though methods are like variables, method defns are static and so allowed. or could just unify stateful 'classes' and traits.
---
could allow implicit local variable definitions (including ones which become included in closures; this is okay because they're lexically scoped) but require explicit instance variable definitions (implicit is not okay here because instance scope is dynamically scoped, i.e. harder to read)
---
classes as 'state domains' to allow tracking of permitted mutable state/non-referential transparency within various functions
or perhaps this should be module-level?!? (modules as state domains, or modules as the domains with which a set of permitted mutable state domains are associated?) i doubt it..
---
" OOP Isn't a Fundamental Particle of Computing The biggest change in programming over the last twenty-five years is that today you manipulate a set of useful, flexible data types, and twenty-five years ago you spent a disproportionately high amount of time building those data types yourself.
C and Pascal--the standard languages of the time--provided a handful of machine-oriented types: numbers, pointers, arrays, the illusion of strings, and a way of tying multiple values together into a record or structure. The emphasis was on using these rudiments as stepping stones to engineer more interesting types, such as stacks, trees, linked lists, hash tables, and resizable arrays.
In Perl or Python or Erlang, I don't think about this stuff. I use lists and strings and arrays with no concern about how many elements they contain or where the memory comes from. For almost everything else I use dictionaries, again no time spent worrying about size or details such as how hash collisions are handled.
I still need new data types, but it's more a repurposing of what's already there than crafting a custom solution. A vector of arbitrary dimension is an array. An RGB color is a three-element tuple. A polynomial is either a tuple (where each value is the coefficient and the index is the degree) or a list of {Coefficient, Degree} tuples. It's surprising how arrays, tuples, lists, and dictionaries have eliminated much of the heavy lifting from the data structure courses I took in college. The focus when implementing a balanced binary tree is on how balanced binary trees work and not about suffering through a tangled web of pointer manipulation.
Thinking about how to arrange ready-made building blocks into something new is a more radical change than it may first appear. How those building blocks themselves come into existence is no longer the primary concern. In many programming courses and tutorials, everything is going along just fine when there's a sudden speed bump of vocabulary: objects and constructors and abstract base classes and private methods. Then in the next assignment the simple three-element tuple representing an RGB color is replaced by a class with getters and setters and multiple constructors and--most critically--a lot more code.
This is where someone desperately needs to step in and explain why this is a bad idea and the death of fun, but it rarely happens.
It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the fundamental particle of computing that some people want it to be. When blindly applied to problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet there's often an aesthetic insistence on objects for everything all the way down. That's too bad, because it makes it harder to identify the cases where an object-oriented style truly results in an overall simplicity and ease of understanding.
(Consider this Part 2 of Don't Distract New Programmers with OOP.) "
---
foundart 3 days ago
I see the Julia home page lists multiple dispatch as one of its benefits. Since my only real exposure to multiple dispatch was when I inherited some CLOS code where it was used to create a nightmare of spaghetti, I'm wondering if any Julia fans here would care to elaborate on how they've used multiple dispatch for Good™ instead of Evil™
astrieanna 2 days ago
Multiple dispatch lets you make math operators work like they do in math. That means that you can use `+` the same way on ints, floats, matrices, and your own self-defined numeric type. If `x` is a variable of your new numeric type, OO languages make making `x + 5` work easy, but `5 + x` super hard. Multiple dispatch makes both cases (equally) easy. This was, as I understand it, the major reason that Julia uses multiple dispatch.
Multiple dispatch can make interfaces simpler: you can easily offer several "versions" of a function by changing which arguments they take, and you can define those functions where it makes sense, even if those places are spread across multiple modules or packages. Julia provides great tools (functions) that make methods discoverable, help you understand which method you're calling, and help you find the definition of methods.
Looking at some Julia code (the base library or major packages) might give you a better idea of how Julia uses multiple dispatch.
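to spell out the `5 + x` point in Python terms: a single-dispatch object model picks the method from the left operand only, so the language needs the reflected `__radd__` hook as a workaround (a made-up Vec type for illustration):

class Vec:
    def __init__(self, xs):
        self.xs = list(xs)
    def __add__(self, other):
        # v + 5 works: dispatch looked at the left operand's class
        if isinstance(other, (int, float)):
            return Vec(x + other for x in self.xs)
        return NotImplemented
    def __radd__(self, other):
        # 5 + v only works because of this special-case reflected hook:
        # int.__add__ gives up, and Python retries on the right operand
        return self.__add__(other)

v = Vec([1, 2, 3])
print((v + 5).xs)   # [6, 7, 8]
print((5 + v).xs)   # [6, 7, 8], thanks to __radd__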
--
http://bytes.com/topic/python/answers/160900-isa-keyword
'isa' keyword
talin at acm dot org:
Although I realize the perils of even suggesting polluting the Python namespace with a new keyword, I often think that it would be useful to consider defining an operator for testing whether or not an item is a member of a category.
Currently, we have the 'in' operator, which tests for membership within a container, and that works very well ...
I propose the word 'isa' because the term 'isa hierarchy' is commonly used to indicate a tree of types. So the syntax would look like this:
if bear isa mammal:
if name isa str:
(I suppose it would look prettier to put a space between "is" and "a", but there are many obvious reasons why you don't want "a" to be a keyword!)
The "isa" operator would of course be overloadable, perhaps by an accessor function called __isa__, which works similarly to __contains__. The potential uses for this are not limited to isinstance() sugar, however. For example:
if image isa gif:
elif image isa jpeg:
elif image isa png:
Rocco Moretti, replying to Terry Hancock:
Fuzzyman wrote: "What's the difference between this and ``isinstance`` ?"
Terry Hancock wrote: "I must confess that an 'isa' operator sounds like it would have been slightly nicer syntax than the isinstance() built-in function. But not enough nicer to change, IMHO."
Rocco Moretti: "Especially considering that checking parameters with 'isinstance' is considered bad form with Python's duck typing."
A reply to the isinstance question: "What's the difference between 'in' and 'has_key()'? 1) It's shorter and more readable, 2) it can be overridden to mean different things for different container types."
What's wrong with:
if image.isa(gif):
elif image.isa(jpeg):
elif image.isa(png):
That forces the classification logic to be put into the instance,
rather than in the category. With the "in" keyword, the "__contains__"
function belongs to the container, not the contained item, which is as
it should be, since an item can be in multiple containers.
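a rough Python sketch of the proposal (the __isa__ hook name is from the thread; the ImageFormat example is made up), showing the classification logic living on the category, just as __contains__ lives on the container:

def isa(item, category):
    # the proposed operator, written here as a plain function
    hook = getattr(type(category), '__isa__', None)
    if hook is not None:
        return hook(category, item)
    return isinstance(item, category)    # fallback: ordinary isinstance() sugar

class ImageFormat:
    def __init__(self, magic):
        self.magic = magic
    def __isa__(self, item):
        # the category decides membership
        return item.startswith(self.magic)

gif, png = ImageFormat(b'GIF89a'), ImageFormat(b'\x89PNG')
isa(b'GIF89a...', gif)   # True
isa(b'GIF89a...', png)   # False
isa(3, int)              # True, via the isinstance() fallback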
---
consider 'static' super in the sense that super could cause a one-time inclusion of new methods and fields instead of causing a chain lookup whenever the object is accessed? probably not..
--
also, https://news.ycombinator.com/item?id=7842047 points out that Python's super is a little too powerful, making it hard for alternative implementations to compile:
carbon12 9 days ago
> What parts of Python 3 syntax are missing? Which parts of the library don't compile?
The only things that don't compile properly are certain uses of "super()". super() without arguments is a very strange beast that captures the first argument of the function (interpreting it as the self object), and needs to infer its class.
Other than that, all the Python scripts in the Python 3 standard library will compile.
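concretely, the two forms below are equivalent, but the zero-argument one has to recover both the enclosing class (via the hidden __class__ cell) and the method's first argument implicitly, which is the part that's hard for another implementation to reproduce:

class Base:
    def greet(self):
        return "hello"

class Child(Base):
    def greet(self):
        return super().greet() + "!"             # zero-arg form: compiler magic
    def greet_explicit(self):
        return super(Child, self).greet() + "!"  # what it roughly desugars to

print(Child().greet(), Child().greet_explicit())   # hello! hello!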
--
mb have multimethods but only commutative ones, e.g. for an operator like addition over ints or floats, define +(int, int), +(int, float), +(float, float)
?
nah
--
http://nice.sourceforge.net/visitor.html
--
mb have multimethods (and unattached functions in general), but then for encapsulating state (or rather, for impure operations), always use single-dispatch objects with methods like beautiful Io? (i mean, Io has single-dispatch objects, not that Io has both of these)
so addition could be a normal function, as could sprintf, but printf would be an object method
if you do this, though, then what about accesses to aliased reference variables which are 'within one of the current monads'? this should logically be done lazily, because accesses to something 'within one of the current monads' should not 'count' as aliased references, right? but in that case, since we may have (commutative) multiple monads, this doesn't jibe with the 'one receiver' for non-referentially-transparent things (although maybe it does, since each such thing's 'receiver' just means either that the thing itself is accessed only via an API/set of methods, or that accesses to it are syntactic sugar for calls to methods of its monad?).
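a toy Python sketch of the split proposed above (names made up): pure operations as free functions, impure ones as methods on the object that owns the resource:

import sys

def add(a, b):              # pure: a free function (could be a multimethod)
    return a + b

def sprintf(fmt, *args):    # pure: just builds and returns a string
    return fmt % args

class Console:
    """Owns an impure resource; its operations are single-dispatch methods."""
    def __init__(self, stream):
        self._stream = stream
    def printf(self, fmt, *args):
        self._stream.write(sprintf(fmt, *args))   # the side effect lives here

con = Console(sys.stdout)
con.printf("%d + %d = %d\n", 2, 3, add(2, 3))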
--
" R does not prevent the declaration of ambiguous multi-methods. At each method call, R will attempt to find the best match between the classes of the parameters and the signatures of method bodies. Thus add(r, l) would be treated as the addition of two “Point” objects. The resolution algorithm differs from Clos’s and if more than one method is applicable, R will pick one and emit a warning. One unfortunate side effect of combining generic functions and lazy evaluation is that method dispatch forces promises to assess the class of each argument. Thus when S4 objects are used, evaluation of arguments becomes strict " -- http://r.cs.purdue.edu/pub/ecoop12.pdf
---
http://www.haskell.org/haskellwiki/Class_system_extension_proposal
---
as in the above proposal, for oot, we need to ensure that default methods in a typeclass can be provided by any descendant of that typeclass (to be inherited by THEIR descendants), not just by the original typeclass defining that method, as is currently the case in Haskell
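a loose Python/ABC analogue of what's wanted (the Eq/Ord/Nat names are made up; the point is that a descendant, not the declaring class, supplies the default):

from abc import ABC, abstractmethod

class Eq(ABC):
    @abstractmethod
    def eq(self, other): ...

class Ord(Eq):
    @abstractmethod
    def lte(self, other): ...
    # a descendant of Eq supplies a default for Eq's method; everything
    # descending from Ord inherits that default
    def eq(self, other):
        return self.lte(other) and other.lte(self)

class Nat(Ord):
    def __init__(self, n):
        self.n = n
    def lte(self, other):
        return self.n <= other.n

print(Nat(2).eq(Nat(2)))   # True, using the default eq provided by Ord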
---
so, what's the difference between a mixin and a superclass? i think i'll define it like this:
in oot, a superclass is the more general thing, and operates like a superclass in Python (well, at least so far; i still have to grok CLOS and see if i like that better)
a mixin is like a superclass, except that internal state added by the mixin is only visible to that mixin, not to other methods in the class or to other mixins. For instance, if mixin "M1" is added to class "A", and M1 has an internally visible field "_M1F1", then if methods of A outside M1 attempt to access A._M1F1, they will get nothing, and if they create it, it will be a different variable.
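Python's double-underscore name mangling is an existing approximation of this; a sketch using the M1/A names from above (the field is spelled __F1 so that mangling kicks in):

class M1:                          # the mixin
    def set_f1(self, v):
        self.__F1 = v              # mangled to self._M1__F1
    def get_f1(self):
        return self.__F1

class A(M1):
    def poke(self):
        # outside the mixin this names a *different* attribute (self._A__F1),
        # so methods of A can't accidentally read or clobber the mixin's field
        self.__F1 = "unrelated"

a = A()
a.set_f1(42)
a.poke()
print(a.get_f1())   # still 42
print(vars(a))      # {'_M1__F1': 42, '_A__F1': 'unrelated'}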
---
the comments i just wrote in [proj-oot-ootLevelsNotes1], regarding how Oot should have both 'or' and muKanren's 'disj' just be spelled 'or' and use ad-hoc polymorphism to disambiguate, led me to speculate once again on the different, dissociable aspects that we associate as part of 'oop':
so in haskell we get the encapsulation via typeclasses; this should be enough to treat both 'disj' and 'or' as 'or'. Clojure's multimethods should support this, too.
---
i guess i like the idea of using interfaces to encapsulate the representation of data structures, but not for everything else; all of the various methods and utilities that are provided by a module to operate on a new data structure shouldn't be object methods, they should be ordinary functions. So, many data structury Oot modules should provide a small interface, one or more implementations of an interface, and many top-level functions that take one or more instances of that interface as arguments.
now what about the problem in Python where sometimes you have something like operation1(list_var) and other times you have something like list_var.operation2(), which makes it awkward to chain/pipeline together operation1 and operation2? i think this is a syntactic problem and we should deal with it syntactically. perhaps with x.f1.f2 = f2(f1(x)), or perhaps x | f1 | f2
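a tiny Python stand-in for the proposed pipeline syntax (the Piped wrapper is made up), just to show the shape:

class Piped:
    """Wraps a value so that `x | f` means f(x)."""
    def __init__(self, value):
        self.value = value
    def __or__(self, fn):
        return Piped(fn(self.value))

# free functions and method-like steps chain uniformly:
result = (Piped([3, 1, 2]) | sorted | sum).value
print(result)   # 6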
---
" Btw, if you ask a C++ developer to describe OOP, he'll say it's the ability of having structs with a VTable attached. Most people think OOP is about state. That's not true, OOP is about single (or multi) dispatch and subtyping (having a vtable), hence it's not at odds with FP, unless you want it to be ;-) "
---
"
alankay1 11 hours ago
Object oriented to me has always been about encapsulation, sending messages, and late-binding. ...
"
---
a comment by Rimantas Liubertas on OOP methodology is interesting:
https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53#.imao8qpbf
he suggests:
the first one is too vague for me and i don't agree with the second one (but that's just semantics anyways). The third and fourth are interesting, though.
---
notes on https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53#.o32umqr5f
the three pillars of OOP are: inheritance, encapsulation, and polymorphism.
the (purported) goal is code reuse.
problems with inheritance:
Banana Monkey Jungle Problem: in practice there tend to be so many inter-class dependencies that you end up having to import a ton of different classes just to be able to reuse one class. Composition works better (my note: i don't see how that would help?)
diamond inheritance problem: actually he just describes a simpler multiple inheritance problem (namely if two parent classes conflict, which one to choose?). One solution is never to use multiple inheritance and to use composition instead.
Fragile Base Class Problem: subclasses are coupled to the base class. Example is given in which a base class changes in a way which doesn't break its external contract, yet still breaks subclasses. One solution is composition.
The example is:
initial base class:
import java.util.ArrayList;
public class Array
{
private ArrayList<Object> a = new ArrayList<Object>();
public void add(Object element)
{
a.add(element);
}
public void addAll(Object elements[])
{
for (int i = 0; i < elements.length; ++i)
a.add(elements[i]); // this line is going to be changed
}
}
subclass:
public class ArrayCount extends Array
{
private int count = 0;
@Override
public void add(Object element)
{
super.add(element);
++count;
}
@Override
public void addAll(Object elements[])
{
super.addAll(elements);
count += elements.length;
}
}
change in base class:
public void addAll(Object elements[])
{
for (int i = 0; i < elements.length; ++i)
add(elements[i]); // this line was changed
}
now the subclass double-counts during addAll; previously the base class directly called ArrayList.add, but now it goes thru its own add method, which has been overridden by the subclass to add a count.
The Hierarchy Problem: eg if you have a company with documents, do you have a Company class with a Documents class in it, or a Documents class with a Company class in it? One solution is to use tags for stuff like this, and only use hierarchies in cases of exclusive containment (eg socks are exclusively contained in your dresser; files on a hard drive are exclusively contained in their directories).
Encapsulation:
Problem with encapsulation: if an object holds a reference to another object, and that reference is aliased elsewhere, then people elsewhere can affect the internal state of this object without it knowing. The solution is value types (or, cloning everything); but the author notes that some things (operating system resources involved with external impure operations) cannot be cloned.
Imo this is a problem, but not really with 'encapsulation'; encapsulation still fulfills its promise of allowing other objects not to worry about the internal representation of this object. Also, just because there is a problem with impurity doesn't mean that encapsulation can't be used in those cases where everything is pure.
Polymorphism:
The author likes polymorphism but says that it doesn't require OOP, so therefore it's not an argument for OOP.
---
" The real problem with OOP is not that it forces you to do things a particular way, it's that what's new about it (privileging the first argument of each procedure, inheritance, lots of hidden mutable state) is bad, and what's good about it (encapsulation, polymorphism) is not new. "
-- https://news.ycombinator.com/item?id=12155019
" > what's new about it (privileging the first argument of each procedure
Multimethods, CLOS, Dylan? "
---
" Things OO languages 'force' us seem to be relatively easy to use: sending a message to a certain object, thinking in categories (elephants, houses, chairs, moveable objects, ...), state (position, speed, size, shape),... Thus it is not surprising that the overwhelming majority of software written in the last 10 years has been written in OO languages. "
---
" 6. Interfaces achieve polymorphism without inheritance.
Interfaces long for inheritance-like features. For example, see Java 8's introduction of default methods, or the boilerplate involved in implementing certain Haskell typeclasses. "
---
threatofrain 18 hours ago [-]
I think that inheritance and composition are not the interesting aspects of object oriented design, or of any language or paradigm. These are tools for code brevity / reuse / elegance. Advanced copying or ctrl-c.
I would also argue that too much elegance in any language or design paradigm faces the same problem, in that elegance is often (but not completely) in tension with modularity or granularity of control, which is an engineering subgoal, because a common engineering situation is to experience system change (feature growth or reorganization of code), and improvements in granularity or modularity makes system restructuring easier.
I think that object oriented design is more essentially about modelling distributed state, because you probably have multiple objects with their own internal state. I believe this means that object oriented design is highly concerned with protocolized communication and synchronization between distributed states, whether via messages or channels or something else.
I believe that in distributed situations, object oriented design can be very harmonious with functional reactive programming strategy. You can easily and usefully have a situation where an object functionally updates its internal state with a typed stream of inputs.
---
i probably said this before, but rather than resolving inheritance conflicts implicitly by having the language specify an implicit ancestor precedence ordering for multiple inheritance, i'd rather force the implementor of a subclass to explicitly resolve these conflicts, and present a compile error until they do so.
For example, if class D inherits from B and C, and both B and C have an attribute/field/method "f1", and the subclass didn't override f1, then this is a compiler error until the implementor resolves it.
Resolutions could be:
however, in the clean case of the 'diamond problem', where B and C both inherited f1 from A, and didn't override it, perhaps this would not be a compiler error(?)
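a minimal Python sketch of this rule (check_conflicts and the B/C/D classes are made up; in Oot it would be a compile-time check rather than a decorator). Two unrelated bases defining the same name is an error unless the subclass overrides it; the clean diamond case passes because neither base defines the name itself:

def check_conflicts(cls):
    names = set()
    for base in cls.__bases__:
        names |= {n for n in vars(base) if not n.startswith('__')}
    for name in names:
        owners = [b for b in cls.__bases__ if name in vars(b)]
        if len(owners) > 1 and name not in vars(cls):
            raise TypeError(
                f"{cls.__name__}: '{name}' comes from both "
                f"{owners[0].__name__} and {owners[1].__name__}; override it")
    return cls

class B:
    def f1(self): return "B"

class C:
    def f1(self): return "C"

@check_conflicts                     # errors unless D resolves the conflict
class D(B, C):
    def f1(self): return B.f1(self)  # explicit resolution: delegate to B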
---
" Sometimes you just really need dynamic typing. One of Fantom's pivotal features is the ability to call a method using strong or dynamic typing. If you call a method via the "." operator, the call is type checked by the compiler and compiled into an efficient opcode. But you can also use the "->" operator to call a method dynamically. This operator skips type checking, and can be used to implement duck typing. The "->" operator actually routes to the Obj.trap method which can be overridden to build all sorts of nifty dynamic designs. "
---
" There are plenty of good reasons why Java and C# ended up using the class/interface model. Multiple inheritance offers lots of power, but comes at the expense of complexity and some pretty nasty pitfalls. Fantom takes a middle of the road approach called mixins. Mixins are essentially Java or C# interfaces that can have method implementations. To avoid some of the pitfalls of true multiple inheritance, mixins restrict features such as fields which store state. Mixins are a very nice feature in the Fantom toolbox when it comes to designing your object oriented models. "
---
i dont understand this and i'm not sure that i need to, just putting this here for future reference:
"
cynicalkane 626 days ago [-]
OO--in Haskell, at least--is most succinctly represented as existential and higher-order types with constraints, which break some important assumptions used for proving things in FP. If you are message passing s.t. the call site is opaque to the caller, this breaks more assumptions. The assumptions in question happen to be important assumptions for the currently cool and trendy research in FP, which is important when you're an academic with a career.
Furthermore, the types required are a bit more general than OO, so once you've introduced the former it doesn't make sense to constrain your landscape of thought to the latter.
tel 626 days ago [-]
I'm not sure it really breaks assumptions: it just requires coinduction and bisimulation instead of induction and equality. Coinduction and Bisimulation aren't as well understood today and are harder to use, so it's a bit of a rough project to move forward with.
What assumptions are you referring to? "
---
wyager 626 days ago [-] ... If you mean mutable + stateful programming with objects as an abstraction mechanism (which is what people usually mean by "OO"), then yes, it is in some ways opposite to pure FP.
OO to FP is kind of like Turing Machines to the Lambda Calculus. ...
---
" Problem 3: CLOS
Note: this section contains a few factual errors pointed out by Pascal Costanza in a comment below. Some of his corrections are also, of course, opinions, and I've commented on them later in the thread. In any case, while I thank Pascal for his corrections, the errors I've made are utterly irrelevant to my conclusions.
CLOS is icky. I haven't worked with Smalltalk a whole lot, but I've worked with it enough to know that to do OOP right, you have to do it from the ground up. CLOS was bolted on to Common Lisp. Everyone knows it, although not many people want to admit it.
...CLOS has problems. One obvious one is that length isn't a polymorphic function....
Another problem is the slot accessor macros. They're insanely clever, but clever isn't what you want. You want first-class function access, so you can pass the getters and setters to map, find-if, etc. You can work around these things, but they're a leaky abstraction, and enough of those will add up to significant mental resistance to getting things done. It's like all those weird little rules in Perl: non-orthogonal rules that add up to forgetting the language every time you leave it alone for more than a week.
What you really want in lieu of CLOS is... complicated. It's a hard problem. Lisp wants to be constructed entirely from macros.((but))...Having the object system — something pretty fundamental to the language, you'd think — written as a bunch of macros doesn't feel right when all is said and done.
When you work with Ruby or Smalltalk or any suitably "pure" OO language (Python doesn't quite count, unfortunately; its bolts are also showing), you realize there are some distinct advantages to having everything be an object. It's very nice, for instance, to be able to figure out what methods are applicable to a given class (e.g. "foo".methods.sort.grep(/!/) from Ruby), and to be able to extend that list with your own new methods. It's a nice organizational technique.
Of course, that forces you into a single-dispatch model, so it becomes harder to figure out what to do about multi-methods. Some Python folks have implemented multi-methods for Python, and they do it by making them top-level functions, which makes sense (where else would you put them?) I'm not claiming that Smalltalk's object model is going to translate straight to Lisp; you have to decide whether cons cells are "objects", for instance, and that's a decision I wouldn't wish on my worst enemy. I don't envy the person who tackles it.
....But changing CLOS to be simpler and more seamless essentially means replacing it. And replacing it is probably best done inside the implementation. ... Or maybe you could go the Haskell route and not have OOP at all. That seems to alienate most programmers, though, despite the attractions of not having to create nouns for everything. (Have you ever noticed that turning a non-object-oriented program into an object-oriented one in the same language that does the same thing essentially doubles its size? Try it sometime...) At the risk of predicting future fashion trends, which is rarely a good idea, I'll venture that objects are going to continue to be trendy for at least a few more decades. So I think Lisp needs some form of "seamless" OOP. " -- [2]
(note: excerpt from Costanza's response in comment was:
" ...
(specializer-methods (class-of object))
This works for any object because all objects have a class in Common Lisp, without any exception (including cons cells).
)
---
more problems with Oop (from a post about good things about Clojure):
" 3) Data first programming
Walking away from object-oriented languages is very freeing.
I want to design a model for the game of poker. I start by listing the nouns3: “card”, “deck”, “hand”, “player”, “dealer”, etc. Then I think of the verbs, “deal”, “bet”, “fold”, etc.
3: For the record, I know that this isn’t the “right” way to design OO programs, but the fact that I have to acknowledge this proves my point. ↩
Now what? Here’s a typical StackOverflow question demonstrating the confusion that comes with designing like this. Is the dealer a kind of player or a separate class? If players have hands of cards, how does the deck keep track of what cards are left?
At the end of the day, the work of programming a poker game is codifying all of the actual rules of the game, and these will end up in a Game singleton that does most of the work anyway.
If you start by thinking about data and the functions that operate on it, there’s a natural way to solve hard problems from the top-down, which lets you quickly iterate your design (see below). You have some data structure that represents the game state, a structure representing possible actions a player can take, and a function to transform a game state and an action into the next game state. That function encodes the actual rules of poker (defined in lots of other, smaller functions). "
--- [3]
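a tiny Python sketch of that last 'data first' shape (GameState/Bet/step are made up, not real poker rules): state and actions are plain data, and one function encodes the rules:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    pot: int
    to_act: str

@dataclass(frozen=True)
class Bet:
    player: str
    amount: int

def step(state: GameState, action: Bet) -> GameState:
    # the rules live here: (state, action) -> next state
    if action.player != state.to_act:
        raise ValueError("acted out of turn")
    nxt = "bob" if state.to_act == "alice" else "alice"
    return replace(state, pot=state.pot + action.amount, to_act=nxt)

s1 = step(GameState(pot=0, to_act="alice"), Bet("alice", 10))
print(s1)   # GameState(pot=10, to_act='bob')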
---
from Python:
PEP 487: Simpler customization of class creation
It is now possible to customize subclass creation without using a metaclass. The new __init_subclass__ classmethod will be called on the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass
---
PEP 487: Descriptor Protocol Enhancements
PEP 487 extends the descriptor protocol to include the new optional __set_name__() method. Whenever a new class is defined, the new method will be called on all descriptors included in the definition, providing them with a reference to the class being defined and the name given to the descriptor within the class namespace. In other words, instances of descriptors can now know the attribute name of the descriptor in the owner class:

class IntField:
    def __get__(self, instance, owner):
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise ValueError(f'expecting integer in {self.name}')
        instance.__dict__[self.name] = value

    def __set_name__(self, owner, name):
        self.name = name

class Model:
    int_field = IntField()
---
someone's opinion about OOP:
" Those features which are potentially good (data hiding, contract enforcement, polymorphism) are not unique to OOP and, in fact, stronger versions of these things are available in non-OOP languages. Those features that are unique to OOP (dependency injection, instantiation) are awful and exist only because OOP is awful. "
---
" what do they view as the core ideas of OOP? CodeBetter? focuses on this list of features:
Encapsulation (((they also mention getter and setter properties here))) ...
Abstraction (((the description is confusing but mb they mean having base classes? they say "As development progresses, programmers know the functionality they can expect from as yet undeveloped subsystems. "))) ...
Inheritance Objects can relate to each other with either a “has a”, “uses a” or an “is a” relationship. ...
Polymorphism ... "
in a Chair example: " it is only necessary to figure out what your application needs to know about and do with chairs "
..."Structs (structures), records, tables, and other ways of organizing related information predated object oriented programming."
"...The most over-used and rather worthless discussion on inheritance that you will see revolves around the “Is-A vs Has-A” discussion. For example, a car is-a vehicle but has-a steering wheel. The idea these authors are chasing is that your car class should inherit your vehicle class and have a steering wheel as a member...The problem here is that inheritance is mixing together several things: you inherit “typeness”, interface, and implementation all at the same time. However, all of the examples focus on interface while talking about “typeness”. The abstract code doesn’t care that a car “is-a” vehicle, just that the objects respond to a certain set of functions, or interface. In fact, if you want to give your chair class accelerate(), brake(), turn_left() and turn_right() methods, shouldn’t the abstract code be able to work on chairs then? Well, of course, but that doesn’t make a chair a vehicle."
" In Clojure, inheritance is simple:
(derive ::rect ::shape)    ;=> nil
(derive ::circle ::shape)  ;=> nil
(isa? ::circle ::shape)    ;=> true
(isa? ::rect ::shape)      ;=> true
Here, I get to define my data-type hierarchy independently of my functions and independently of any state. We do not need OOP to have inheritance. "
"
Also, there are the SOLID principles, which describe good architectural ideas for software, and which assume that OOP programming is the best way to implement them:
Single responsibility principle: a class should have only a single responsibility (i.e. only one potential change in the software’s specification should be able to affect the specification of the class)
Open/closed principle: “software entities … should be open for extension, but closed for modification.”
Liskov substitution principle: “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” See also design by contract.
Interface segregation principle: “many client-specific interfaces are better than one general-purpose interface.”[8]
Dependency inversion principle: one should “Depend upon Abstractions. Do not depend upon concretions.”[8] Dependency injection is one method of following this principle.
"" All the same, when I go to a job interview, the things I get asked about the most often are encapsulation and polymorphic dispatch and inheritance, so simply by having those 3 on the list, I think I am covering the core ideas that people associate with OOP. "
" It is tragic that OOP has had to retreat from such a powerful idea ((inheritance)), to the extent that composition is now the favored strategy: ... Back in the 1960s, OOP introduced the concept of inheritance, and yet now we know OOP is a terrible way to implement inheritance. And yet, we often want inheritance, not composition.
Inheritance is good. Nearly all data-types belong to a hierarchy, and my code should be able to represent that hierarchy without fear. If I write software for a store, I will probably have a “sell price” and a “buy price”, both of which are children of “price” which is a child of “transaction metric” which is a child of “decimal” (or “number”). I don’t want to model this with composition, nor do I want to worry about brittle base classes and tight-coupling between the interface and the implementation. To escape from these worries, I want my data-types declared in a hierarchy that is all-together separate from my code’s behavior (the functions that I write). Functional languages like such as Shen, Haskell or Clojure allow for data-type definitions that are separate from the behavior of my code. Java does not. Ruby does not. Python does not. "
((i think what e wants is interface/typeclass inheritance))
" 1.) Those items which are potentially good (data hiding, contract enforcement, polymorphism) are not unique to OOP and, in fact, stronger versions of these things are available in non-OOP languages.
2.) Those items that are unique to OOP (dependency injection, constructor values) are awful and exist only because OOP is awful. " --- [4]
"Concurrency Oriented Programming ((eg Erlang)) also provides the two major advantages commonly associated with object-oriented programming. These are polymorphism and the use of defined protocols having the same message passing interface between instances of different process types." -- [5]
[6] wants to keep the following things separate:
and wants to keep 'data-type hierarchy' separate from 'behavior' -- that is, he doesn't like how in many OOP languages, subclassing of types is tied up with inheritance of implementations. He points out (above), that clojure has constructs (derive, isa) that do data type subclassing without method implementation inheritance.
I think what e means about keeping 'enforcing a contract' separate from 'mutating state' is that he wants behaviors (which is what obeys contracts, i guess) to be functions, not methods, and he wants mutable state to be stored separately in a small number of global variables.
" I recall when I first discovered Ruby On Rails, back in 2005. It had tools that would auto-generate my model classes for me, which I thought was very cool. More so, the model classes were simple. If I had database tables for “users”, “products”, “sales” and “purchases”, the tools would probably generate 4 files for me, that would look like this:
class User
end

class Product
end

class Sale
end

class Purchase
end
At the time, I thought, this is fantastic! I didn’t have to write getters and setters, that was implied! My limited experience with Java had left me with a bad taste in my mouth — Java was verbose! I had to write every function to get and set a variable. How tedious! (Even PHP, a script language, forced me to write getters and setters, just like Java.) Ruby saved us from all that. But the present constantly changes the meaning of the past. The years go by and we see things differently. I spent 2 years away from Ruby, working with Clojure, and then I came back to Ruby and I was astonished to realize that it now seemed high-ceremony to me. Look at those 4 empty classes! That’s 4 files, and 8 lines of code, that are mere boilerplate — they don’t help me with my task, I just have to write it so that when method_missing is called it will have some context to work with. What seemed low-ceremony to me, in 2005, strikes me as high-ceremony in 2014. "
"But they can never get there, since the language forces them to declare data-types, behavior and state all in one spot. "
---
should support multiple constructors (polymorphism by argument type, in general)
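in Python, which lacks overloading by argument type, the usual workaround is named classmethod constructors; something like this sketch is roughly what 'multiple constructors' buys you (Color and the from_* names are made up):

class Color:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

    @classmethod
    def from_hex(cls, s):          # alternative constructor #1
        return cls(int(s[1:3], 16), int(s[3:5], 16), int(s[5:7], 16))

    @classmethod
    def from_gray(cls, level):     # alternative constructor #2
        return cls(level, level, level)

print(vars(Color.from_hex("#ff8000")))   # {'r': 255, 'g': 128, 'b': 0}
print(vars(Color.from_gray(7)))          # {'r': 7, 'g': 7, 'b': 7}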
---
i've been thinking of objects in Oot as "packages of state", eg mutable structs accessed via reference, but i guess this is actually orthogonal to "a data representation encapsulated with the methods needed to access it". You can have encapsulation of data representation even if the struct is immutable and passed by value. In fact, the same data representation and methods can be used to operate on a mutable struct in-place, or on immutable data. And in Oot, we want immutable to be the default.
Although i disagree with much of it, [7]