proj-oot-ootMemoryManagementNotes2

	"

Brad Abrams posted an e-mail from Brian Harry, written during development of the .NET Framework. It details many of the reasons reference counting was not used, even though one of the early priorities was to keep semantic equivalence with VB6, which uses reference counting. It looks into possibilities such as having only some types ref counted (IRefCounted?), or having only specific instances ref counted, and why none of these solutions was deemed acceptable. (A minimal illustration of the cycle problem the e-mail keeps returning to follows the quote below.)

    Because [the issue of resource management and deterministic finalization] is such a sensitive topic I am going to try to be as precise and complete in my explanation as I can. I apologize for the length of the mail. The first 90% of this mail is trying to convince you that the problem really is hard. In that last part, I'll talk about things we are trying to do but you need the first part to understand why we are looking at these options.
    ...
    We initially started with the assumption that the solution would take the form of automatic ref counting (so the programmer couldn't forget) plus some other stuff to detect and handle cycles automatically. ...we ultimately concluded that this was not going to work in the general case.
    ...
    In summary:
        We feel that it is very important to solve the cycle problem without forcing programmers to understand, track down and design around these complex data structure problems.
        We want to make sure we have a high performance (both speed and working set) system and our analysis shows that using reference counting for every single object in the system will not allow us to achieve this goal.
        For a variety of reasons, including composition and casting issues, there is no simple transparent solution to having just those objects that need it be ref counted.
        We chose not to select a solution that provides deterministic finalization for a single language/context because it inhibits interop with other languages and causes bifurcation of class libraries by creating language specific versions.

"

https://blogs.msdn.microsoft.com/brada/2005/02/11/resource-management/
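The cycle problem mentioned in the summary above is easy to reproduce. Here is a minimal C sketch (Node, retain, release are invented names, not any .NET or VB6 API) of why naive reference counting alone can never reclaim two objects that point at each other: each keeps the other's count above zero.

    /* Minimal illustration of the reference-cycle problem.  All names here
     * (Node, retain, release) are hypothetical, invented for this sketch. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Node {
        int refcount;
        struct Node *other;        /* one reference field is enough for a cycle */
    } Node;

    static Node *retain(Node *n) { if (n) n->refcount++; return n; }

    static void release(Node *n) {
        if (!n || --n->refcount > 0) return;
        release(n->other);         /* the dying node drops its own reference */
        free(n);
    }

    static Node *make_node(void) {
        Node *n = calloc(1, sizeof *n);
        n->refcount = 1;           /* the creating code owns one reference */
        return n;
    }

    int main(void) {
        Node *a = make_node();
        Node *b = make_node();
        a->other = retain(b);      /* a -> b */
        b->other = retain(a);      /* b -> a: a cycle */

        /* The program is done with both nodes, but each still holds the other
         * alive: both counts drop to 1, never to 0, and the memory leaks.  A
         * cycle detector or a tracing collector is needed to reclaim them.  */
        release(a);
        release(b);
        printf("a: %d ref(s), b: %d ref(s) -- both leaked\n",
               a->refcount, b->refcount);
        return 0;
    }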

---

http://stackoverflow.com/tags/garbage-collection/info

---

Maybe someday ask StackOverflow for a survey of approaches to garbage collection in programming languages with the following goals:

I'm designing a programming language and I'm looking for suggestions for garbage collection/automatic memory management algorithms that meet the following criteria. Latency (GC pauses) must be minimized, but throughput is not important. The language is intended to be easy to reimplement on different platforms, so it is imperative that the GC technique in the reference implementation be simple. To reiterate: the algorithm under discussion could later be replaced with a more complicated, higher-performing GC algorithm in some implementations, but what I need for the reference implementation (i.e., what I am asking for here) is simplicity and ease of (correct) implementation, even at the expense of inefficiency. The data structures being managed will be accessed concurrently by multiple threads and will contain reference cycles.

In more detail, design goals include:

The default garbage collection systems of language implementations such as the JVM are not suitable because (a) they can have long GC pauses, and (b) they are probably more concerned with efficiency than simplicity. Some of the other automatic memory management systems that I am currently looking at as potential models are Inferno's, Go's, and Python's; other suggestions, comments, and comparisons are welcome.
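As a concreteness check on "simple, correct, inefficient is fine", here is roughly the baseline I have in mind: a stop-the-world mark-and-sweep collector in a few dozen lines of C (all names invented for the sketch). Because liveness is decided by reachability from roots rather than by counts, it reclaims cycles; because it pauses the world, it is exactly the part a production implementation would later replace.

    /* Sketch of a minimal stop-the-world mark-and-sweep collector.  All names
     * (Obj, gc_alloc, gc_collect) are invented; simplicity, not latency, is
     * the point of this baseline. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Obj {
        struct Obj *a, *b;         /* two reference fields: enough for cycles */
        struct Obj *next;          /* intrusive list of every allocation      */
        int marked;
    } Obj;

    static Obj *all_objects = NULL;
    static Obj *roots[16];         /* toy root set                            */
    static int  nroots = 0;

    static Obj *gc_alloc(void) {
        Obj *o = calloc(1, sizeof *o);
        o->next = all_objects;
        all_objects = o;
        return o;
    }

    static void mark(Obj *o) {
        if (!o || o->marked) return;   /* the marked check terminates cycles  */
        o->marked = 1;
        mark(o->a);
        mark(o->b);
    }

    static void gc_collect(void) {
        for (int i = 0; i < nroots; i++)    /* mark: trace from the roots      */
            mark(roots[i]);
        Obj **p = &all_objects;             /* sweep: free whatever is unmarked */
        while (*p) {
            if ((*p)->marked) { (*p)->marked = 0; p = &(*p)->next; }
            else { Obj *dead = *p; *p = dead->next; free(dead); }
        }
    }

    int main(void) {
        Obj *x = gc_alloc(), *y = gc_alloc();
        x->a = y;
        y->a = x;                  /* a reference cycle                        */
        roots[nroots++] = x;       /* reachable from a root, so it survives    */
        gc_collect();

        nroots = 0;                /* root dropped: the whole cycle is garbage */
        gc_collect();
        printf("everything collected: %s\n", all_objects ? "no" : "yes");
        return 0;
    }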

http://stackoverflow.com/questions/tagged/programming-languages
http://stackoverflow.com/questions/tagged/language-design
http://softwareengineering.stackexchange.com/questions/tagged/programming-languages
http://softwareengineering.stackexchange.com/questions/tagged/programming-languages+language-design

---

jeffdavis 6 hours ago

Tangent:

In databases, it's common to do arena-based allocation. You allocate objects with similar lifetimes (life of a query, life of a transaction, life of a procedure execution, etc.) together, and free them all at once when that lifetime is up.

When hacking PostgreSQL, for instance, using pfree() is fairly uncommon. You just allocate (allocations go to the current arena by default), and the arena is wiped out all at once later.

Of course, sometimes you might use a lot of memory temporarily in a loop, and you need to pfree() it each iteration. But that is the exception.

I think of each arena as a tiny heap (though of course they can grow large).
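A minimal sketch of the arena idea described above (the general pattern only, not PostgreSQL's actual MemoryContext/palloc machinery, which is considerably more elaborate): allocation bumps a pointer into a block tied to some lifetime, and "freeing" is resetting that pointer when the lifetime ends.

    /* Minimal arena ("region") allocator sketch.  Names (Arena, arena_alloc,
     * arena_reset) are invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Arena {
        char  *base;
        size_t used;
        size_t capacity;
    } Arena;

    static Arena arena_create(size_t capacity) {
        Arena a = { malloc(capacity), 0, capacity };
        return a;
    }

    /* Bump allocation: no per-object header, no individual free. */
    static void *arena_alloc(Arena *a, size_t size) {
        size = (size + 7) & ~(size_t)7;            /* 8-byte alignment        */
        if (a->used + size > a->capacity) return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    /* "Free everything at once" when the lifetime (query, transaction, ...) ends. */
    static void arena_reset(Arena *a)   { a->used = 0; }
    static void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }

    int main(void) {
        Arena per_query = arena_create(1 << 20);   /* 1 MiB "query lifetime"  */

        for (int q = 0; q < 3; q++) {
            /* Allocate freely while processing one query...                   */
            char *buf = arena_alloc(&per_query, 128);
            snprintf(buf, 128, "result of query %d", q);
            printf("%s (arena used: %zu bytes)\n", buf, per_query.used);

            /* ...then wipe the whole arena instead of freeing piece by piece. */
            arena_reset(&per_query);
        }
        arena_destroy(&per_query);
        return 0;
    }

Individual frees (the pfree() case mentioned in the comment above) are the exception in this style; the design bet is that most allocations share a lifetime, so tracking them one by one is wasted work.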


favorited 5 hours ago

This was also an optimization you could do in Objective-C on NeXTSTEP/OpenStep.