proj-oot-ootLibrariesNotes7

https://github.com/Majoolr/ethereum-libraries?files=1

---

https://github.com/Majoolr/ethereum-libraries/tree/master/BasicMathLib

---

i briefly looked again at the popular alternative libcs.

dietlibc seems to be the smallest one, although musl seems to be the most popular one (b/c dietlibc leaves out some functionality).

this is the most seen comparison chart: http://www.etalabs.net/compare_libcs.html

also possibly relevant:

https://www.reddit.com/r/programming/comments/t32i0/smallest_x86_elf_hello_world/

---

" The testing package also has an addition. The new Helper method, added to both testing.T and testing.B, marks the calling function as a test helper function. When the testing package prints file and line information, it shows the location of the call to a helper function instead of a line in the helper function itself. "

---

"We argue that built-in primitive types, notably pointers (refer- ences), should come with efficient discriminators, not just equality tests, since they facilitate the construction of discriminators for ab- stract types that are both highly efficient and representation indepen- dent. ... To illustrate this, let us consider the problem of pointer discrim- ination

finding all the duplicates in an input sequence of pointers; that is, partitioning the input according to pointer equality. This is the problem at the heart of persisting (“pickling”) pointer data structures onto disk, contracting groups of isomorphic terms with embedded pointers, computing joins on data containing pointers, etc. Let us try to solve pointer discrimination in ML. 2 Pointers are mod- eled by references in ML, which have allocation, updating, dereferencing and equality testing as the only operations....Having only a binary equality test carries the severe disadvantage, how- ever, that partitioning a list of n references requires O(n^2) equality tests " [1]

" An alternative to ML references is to abandon all pretenses of guar- anteeing representation independence and leaving it in the hands of the developers to achieve whatever level of semantic determinacy is required. This is the solution chosen for object references in Java, which provides a hash function on references.

We use Java as a proxy for any language that allows interpreting a pointer as a sequence of bits, such as C and C++, or provides a hashing-like mapping of references to integers, such as Java and C#.

Hashing supports efficient associative access to references. In particular, finding duplicate references can be performed by hashing references into an array and processing the references mapped to the same array bucket one bucket at a time. The price of admitting hashing on references, however, is loss of lightweight implementation of references and loss of representation independence: it complicates garbage collection (e.g. hash values must be stored for copying garbage collectors) and makes execution potentially nondeterministic. Computationally, in the worst case it does not even provide an improvement: All references may get hashed to the same bucket, and unless the hashing function is known to be perfect, pairwise tests are necessary to determine whether they all are equal. It looks like we have a choice between a rock and a hard place: Either we can have highly abstract references that admit a simple, compact machine address representation and guarantee deterministic semantics, but incur prohibitive complexity of partitioning-style bulk operations (ML references); or we can give up on light-weight references and entrust deterministic program semantics to the hands of the individual developers (Java references). The problem of multiple run-time representations of the same semantic value is not limited to references. Other examples are abstract types that do not have an unchanging "best" run-time representation, such as sets and bags (multisets). For example, it may be convenient to represent a set by any list containing its elements, possibly repeatedly. ... In this paper we show that execution efficiency and representation independence for generic sorting and partitioning can be achieved simultaneously. We introduce a bulk operation called discrimination, which generalizes partitioning and sorting: It partitions information associated with keys according to a specified equivalence, respectively ordering relation on the keys.
For ordering relations, it returns the individual partitions in ascending order. ... What a discriminator does is surprisingly complex to define formally, but rather easily described informally: It treats keys as labels of values and groups together all the values with the same label in an input sequence. The labels themselves are not returned. Two keys are treated as the "same label" if they are equivalent under the given equivalence relation. The parametricity property expresses that values are treated as satellite data, as in sorting algorithms (Knuth 1998, p. 4) (Cormen et al. 2001, p. 123) (Henglein 2009, p. 555). In particular, values can be passed as pointers that are not dereferenced during discrimination. A discriminator is stable if it lists the values in each group in the same positional order as they occur in the input. A discriminator is an order discriminator if it lists the groups of values in ascending order of their labels " [2]
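The informal description of a discriminator above can be sketched in a few lines. This is only an illustration of the interface (keys are labels, values are satellite data, output is stable), using Python's insertion-ordered dicts as a stand-in for the paper's reduction to primitive types, not the paper's actual algorithm:

```python
def discriminate(pairs):
    # Group satellite values by key. Keys act only as labels and are not
    # returned; groups come out in first-occurrence (stable) order.
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)
    return list(groups.values())

# discriminate([(1, 'a'), (2, 'b'), (1, 'c')]) groups 'a' with 'c'
```

Note this runs in linear time for hashable keys, which is exactly the advantage the paper wants without giving up representation independence.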

stevenschmatz 14 hours ago [-]

What's the catch? Are there any preconditions on the input required?

reply

kraghen 12 hours ago [-]

All orderings must be specified as a reduction to a primitive order, using the fact that if you have an equivalence relation on some type A and a reduction f : B -> A, then you have an equivalence on B defined by x = y when f(x) = f(y).

Now, take the rational numbers. For the usual binary comparison we can simply define (a/b) < (c/d) as ad < cb. It's not obvious how to express this as a reduction to a primitive ordering (the answer is to compute the continued fraction).
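As a sketch of the reduction hinted at here (an illustration, not the commenter's code): a rational can be expanded into its continued-fraction terms, giving an integer sequence usable as a key for a primitive-order discriminator. The actual order on such sequences requires alternating reversal at each level, which is elided in this sketch:

```python
from fractions import Fraction

def continued_fraction(q):
    # Expand a rational into its continued-fraction terms,
    # e.g. 3/7 = 0 + 1/(2 + 1/3) -> [0, 2, 3]
    terms = []
    n, d = q.numerator, q.denominator
    while d:
        a, r = divmod(n, d)
        terms.append(a)
        n, d = d, r
    return terms
```

Equality of rationals then reduces to equality of these term lists, which is the kind of reduction a discriminator can consume.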

In fact, I'm not aware of any systematic way of deriving discrimination-suitable orderings from binary predicates -- it might be an open research problem, as far as I am aware.

reply

KirinDave 12 hours ago [-]

> In fact, I'm not aware of any systematic way of deriving discrimination-suitable orderings from binary predicates -- it might be an open research problem, as far as I am aware.

That'd be an even more remarkable discovery in light of the stuff you worked on though, wouldn't it?

reply

kraghen 12 hours ago [-]

It might, I never really discussed this aspect with Fritz! For my thesis I was mostly focussed on applications to database queries, and I never encountered any concrete examples of orderings that couldn't be dealt with in an obvious way using discrimination based sorting.

reply

pjmlp 6 days ago [-]

> You don't need database providers to be integrated in the language spec. Not in CL and not in many other successful languages.

Many people underestimate how useful Perl DBI, JDBC, ODBC, Python DB-API, ADO.NET are.

Which is why, in spite of all design flaws, even Go has a database interface defined on their core library.

---

junke 6 days ago [-]

Not sure what you count as higher order abstractions, but lparallel, fset, cl-ppcre and other libraries make use of compiler macros; this is relevant w.r.t. efficiency.

reply

KirinDave 6 days ago [-]

Haxl is modern. Ponylang's concurrency is modern. Rust's concurrency is modern.

lparallel and fset's affordances are not. I'm not sure why a single threaded regular expression compiler is mentioned here.

I'd have to check. Is fset even using the current standard (not the newest stuff) for immutable-friendly data structures? Last time I checked they had used a lot of older stuff from Okasaki's work and much of that has been improved upon substantially now.

Even Clojure is out of date, compared to this year's innovations!

reply

---

https://jakevdp.github.io/PythonDataScienceHandbook/

---

bjoli 6 days ago [-]

One thing that makes Racket shine is its macro facilities. Syntax-case is nice and all that, but Jesus Christ in a chicken basket I wish Scheme would have standardised on syntax-parse.

Syntax-case vs. syntax-parse isn't and will never be close to a fair comparison. Not only is syntax-parse more powerful, it also provides the users of your macros with proper error messages. It blows both unhygienic and other hygienic macro systems out of the water for anything more complex than very basic macros.

reply

agumonkey 6 days ago [-]

Here's the doc for the curious http://docs.racket-lang.org/syntax/stxparse-intro.html

Interesting system indeed

reply

rkallos 6 days ago [-]

100% agreed. After using syntax-parse, it pains me to use anything else. It's a gem.

reply

---

farray 222 days ago

parent [-] on: Channels in Common Lisp

I once tried ChanL but it was buggy, and concurrency bugs are the worst to debug.

lparallel on the other hand was solid to me, though I didn't like its API and had to (trivially) build my own message-passing abstractions on top of it.

fiddlerwoaroof 221 days ago [-]

I've been a bit interested in this, basically an attempt to bring some of the nice things of the Erlang/OTP platform to CL:

http://mr.gy/blog/erlangen-intro.html

---

http://popcon.debian.org/by_vote

https://brew.sh/analytics/install-on-request/

---

nyrikki 18 hours ago [-]

I agree but wanted to add in pandas and seaborn.

I actually keep a jupyter qtconsole open to use them for ad-hoc data visualization. Pandas replaced excel for me a while ago and I cringe every time I need to abandon seaborn for a Tableau workbook these days.

GUI visualization tools like Tableau or PowerBI seem to err towards presentation, while the defaults for seaborn help discover and visualize data while still producing results good enough to make C*O's happy.

reply

---

the js stats libraries used by this: http://www.sumsar.net/best_online/

"Libraries used: jStat for probability distributions, Flot for plotting and JQuery for this and that"

---

from [3]:

Feature 9: Standard library additions functools.lru_cache

    A LRU cache decorator for your functions.
    From docs.
    from functools import lru_cache
    import urllib.request, urllib.error

    @lru_cache(maxsize=32)
    def get_pep(num):
        'Retrieve text of a Python Enhancement Proposal'
        resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
        try:
            with urllib.request.urlopen(resource) as s:
                return s.read()
        except urllib.error.HTTPError:
            return 'Not Found'
    >>> for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
    ...     pep = get_pep(n)
    ...     print(n, len(pep))
    >>> get_pep.cache_info()
    CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

--- from [4]:

Feature 9: Standard library additions enum

    Finally, an enumerated type in the standard library.
    Python 3.4 only.
    >>> from enum import Enum
    >>> class Color(Enum):
    ...     red = 1
    ...     green = 2
    ...     blue = 3
    ...
    Uses some magic that is only possible in Python 3 (due to metaclass changes):
    >>> class Shape(Enum):
    ...     square = 2
    ...     square = 3
    ...
    Traceback (most recent call last):
    ...
    TypeError: Attempted to reuse key: 'square'

---

http://www.craigkerstiens.com/2017/06/08/working-with-time-in-postgres/ https://web.archive.org/save/https://news.ycombinator.com/item?id=14517982

---

https://tinyapps.org/

---

pandas "can act in unexpected ways - throw in an empty value in a list of integers and you suddenly get floats (I know why, but still), or increase the number of rows beyond a certain threshold and type inference works differently."

" 2) Depends a bit on your background, but to me this is not really unexpected. Integers don't have a well-defined "missing" value while Floats do, so Pandas is trying to help you by not using python objects and instead converting to the "most useful" array type. It only does so if it can convert the integers without loss of precision. "

" 1. the weird handling of types and null values (#4) 2. the verbosity of filtering like `dataframe[dataframe.column == x]` and transformations like `dataframe.col_a - dataframe.col_b`, compared to `dplyr` in R 3. warts on the indexing system (including MultiIndex?, which is very powerful but confusing) "

paultopia 3 days ago [-]

  2 is a big issue for me. Filtering, subsetting, these are all really arcane-feeling transformations; there's a lot of weirdness with view vs. copy, etc.

reply

filmor 3 days ago [-]

You can write 2) as dataframe.query("column == x").

reply
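The two filtering styles discussed in this thread are equivalent; a quick sketch with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({"column": [1, 2, 3], "other": ["a", "b", "c"]})

mask_style = df[df.column == 2]        # boolean-mask filtering
query_style = df.query("column == 2")  # string-expression filtering

assert mask_style.equals(query_style)
```

`query` trades a little safety (expressions are strings) for noticeably less repetition of the dataframe name.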

---

 jampekka 3 days ago [-]

I would love to see some sort of "smarter" indexing in the engine. I use pandas quite a bit, but I've never really understood the rationale behind the indexing, especially why indexes are treated so separately from data columns. I seem to be resetting and recreating indexes all the time, and use the .values a lot.

More SQL-style indexing would be a lot more intuitive at least for me.

reply

nerdponx 3 days ago [-]

I used to hate it, but I've come around to its usefulness in some cases.

However I do prefer the R data.table model, which is what you describe. You can set an index on one or more columns in the table, and that's that.

reply

---

timClicks 3 days ago [-]

I too would welcome a friendlier pandas library, but every time I've tried to think of an API that would work I fail. Well actually, I keep on wanting pandas to understand SQL.

reply

lars512 3 days ago [-]

There’s a pandasql library which lets you execute SQL on dataframes. It’s a little slower because it needs to serialize via SQLite, but it’s a quick way to get going.

https://pypi.python.org/pypi/pandasql

reply

chasedehan 3 days ago [-]

I will definitely echo (2). dplyr is amazing and works in far, far fewer lines of code than pandas. That was my largest issue when migrating over from R.

There is dplython, but it doesn't quite work the same so I don't use it much. https://github.com/dodger487/dplython

reply

has2k1 3 days ago [-]

I created plydata, you may find it sufficient for your needs.

https://github.com/has2k1/plydata

reply

chasedehan 2 days ago [-]

Is the only difference the placeholder X? I was running into issues with dplython in executing arbitrary functions outside the tidyverse. Can your package handle situations such as this?

df %>% select(var1, var2) %>% rbind(df2) %>% na.omit()

etc? That was the big benefit I saw from using dplyr.`

reply

has2k1 2 days ago [-]

Yes, no X placeholder. And at the moment you cannot pipe to arbitrary functions, Python limitations. I'll get around this by providing a helper function e.g

df >> call(pd.dropna, axis=1)

reply

---

chasedehan 3 days ago [-]

> pandas is an amazing library, the best at exploratory work,

I will add ... "in python." That is definitely true, but to call it "the best at exploratory work" is not accurate. I might be opening up a completely separate debate, but for down and dirty exploratory work nothing beats R's dplyr and ggplot.

...

newen 2 days ago [-]

You mean R's data.table and ggplot ;)

reply

---

the pandas guy says:

" To the outside eye, the projects I've invested in may seem only tangentially-related: e.g. pandas, Badger, Ibis, Arrow, Feather, Parquet. Quite the contrary, they are all closely-interrelated components of a continuous arc of work I started almost 10 years ago. " -- http://wesmckinney.com/blog/apache-arrow-pandas-internals/

" Arrow's C++ implementation provides essential in-memory analytics infrastructure for projects like pandas:

    A runtime column-oriented memory format optimized for analytical processing performance
    A zero-copy, streaming / chunk-oriented data layer designed for moving and accessing large datasets at maximum speeds
    Extensible type metadata for describing a wide variety of flat and nested data types occurring in real-world systems, with support for user-defined types

What's missing from the Arrow C++ project at the moment (but not for too much longer) is:

    A comprehensive analytical function "kernel" library
    Logical operator graphs for graph dataflow-style execution (think TensorFlow or PyTorch, but for data frames)
    A multicore scheduler for parallel evaluation of operator graphs"

---

http://docs.ibis-project.org/generated-notebooks/1.html

---

ideally there should be a 'curses' library in Oot. However, ncurses (and its predecessor 'curses') is mainly a Unix-y thing and is not as easily ported to Windows (eg Python ships its 'curses' library (which I think is actually ncurses) only on Unix-y systems, although there are abandoned third-party alternatives that claim to work on Windows).

But really all we need is a small subset of primitives:
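As a purely speculative sketch of what such a primitive subset might look like (function names are made up here; assumes a VT100/ANSI-compatible terminal):

```python
import sys

def move_cursor(row, col):
    # ANSI CUP sequence: position the cursor (1-based row/column)
    sys.stdout.write("\x1b[%d;%dH" % (row, col))

def clear_screen():
    # ANSI ED sequence: erase the whole display
    sys.stdout.write("\x1b[2J")

def put_text(row, col, text):
    # write text at a position; a real library would also track state
    move_cursor(row, col)
    sys.stdout.write(text)
    sys.stdout.flush()
```

Roughly this plus raw-mode key input is what the lowest layer of libraries like Lanterna (discussed below) provides.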

---

x-platform get keypress in Python:

non-blocking: https://stackoverflow.com/questions/5044073/python-cross-platform-listening-for-keypresses

blocking: https://stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user
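The blocking case from those threads usually boils down to the following msvcrt/termios recipe; this is a sketch assembled from the usual answers, not tested on every platform:

```python
import sys

def getch():
    """Blocking read of a single keypress, cross-platform."""
    try:
        import msvcrt          # Windows
        return msvcrt.getwch()
    except ImportError:
        import termios, tty    # Unix-y systems
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)     # disable line buffering and echo
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
```

Note it requires a real tty on Unix (tcgetattr fails on a pipe), which is one reason a proper curses-like layer is nicer than hand-rolled raw mode.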

---

this should be more obvious. In Python, a round trip from a POSIX timestamp (seconds since Unix epoch) through datetime involves two different libraries and three functions:

import calendar, datetime

    # 1. Convert a unix time u to a datetime object d, and vice versa
    def dt(u): return datetime.datetime.utcfromtimestamp(u)
    def ut(d): return calendar.timegm(d.timetuple())

-- https://stackoverflow.com/questions/13260863/convert-a-unixtime-to-a-datetime-object-and-back-again-pair-of-time-conversion

---

Java 'curses'-like thing

" Lanterna screenshot

Lanterna is a Java library allowing you to write easy semi-graphical user interfaces in a text-only environment, very similar to the C library curses but with more functionality. Lanterna is supporting xterm compatible terminals and terminal emulators such as konsole, gnome-terminal, putty, xterm and many more. One of the main benefits of lanterna is that it's not dependent on any native library but runs 100% in pure Java.

Also, when running Lanterna on computers with a graphical environment (such as Windows or Xorg), a bundled terminal emulator written in Swing will be used rather than standard output. This way, you can develop as usual from your IDE (most of them doesn't support ANSI control characters in their output window) and then deploy to your headless server without changing any code.

Lanterna is structured into three layers, each built on top of the other and you can easily choose which one fits your needs best.

    The first is a low level terminal interface which gives you the most basic control of the terminal text area. You can move around the cursor and enable special modifiers for characters put to the screen. You will find these classes in package com.googlecode.lanterna.terminal.
    The second level is a full screen buffer, the whole text screen in memory and allowing you to write to this before flushing the changes to the actual terminal. This makes writing to the terminal screen similar to modifying a bitmap. You will find these classes in package com.googlecode.lanterna.screen.
    The third level is a full GUI toolkit with windows, buttons, labels and some other components. It's using a very simple window management system (basically all windows are modal) that is quick and easy to use. You will find these classes in package com.googlecode.lanterna.gui2."

---

a lot of good technical stuff about how a 'curses-like' thingee works here:

https://github.com/mabe02/lanterna/blob/master/docs/introduction.md

https://web.archive.org/web/20171017050911/https://github.com/mabe02/lanterna/blob/master/docs/introduction.md

https://github.com/mabe02/lanterna/blob/master/docs/using-terminal.md

some more technical details about ncurses are in http://invisible-island.net/ncurses/ncurses.faq.html

we should probably have a library that works a lot like the lowest of Lanterna's 3 levels (documented in [5])

---

here's another ncurses-like thingee, but a little more widget-y:

https://github.com/chjj/blessed

---

rurban 107 days ago [-]

Well, amongst Lispers we do argue a lot about the missing or overarchitectured bits. The FFI, threads, the condition system, the MOP, ... But only the various FFIs pose a minor problem. It's complaining on a very high level.

daly 101 days ago [-]

Probably the most difficult area in the CL standard is handling pathnames. There are so many possible pitfalls. FFI is an area that, despite best efforts, is likely not a good candidate for inclusion in a standard. It would be better to have all of the implementations converge on a convention since the underlying foreign systems will change over time. Not everything that "just works" has to be in a standard.

C++ issues a "new standard" every 3 years or so which is just nonsense. Even Stroustrop said that the "new C++" is not the "old C++". They are adding ideas (e.g. Concepts) that virtually no one is asking for or knows how to use. Ten years from now that C++ maintenance job you have will be a nightmare. PL/I tried to be everything to everyone for every task and eventually disappeared. C++ is on the same long-term death march. You heard it here first.

---

Clojure has a few interesting features which are well integrated with each other, but it also comes with a lot of opinionated choices one might not like. Another thing - not a language issue per se - is that relying on the Java ecosystem can be painful (and ClojureScript, fortunately finally bootstrapped a while back, is not quite the same language).

Personally, I feel that, in pursuit of simplicity, Clojure dropped a lot of useful features. As an example, in both CL and Racket, you have 2-3 times more ways of structuring your code than in Clojure. This is important, because different parts of a codebase benefit from different structuring mechanisms - and the more choice you have, the easier it is to find the right one. In Clojure, all you have are maps and defrecord, with the latter being almost never used.

Of course, you have macros to combat this, but then you're put in the shoes of language inventor and implementer, which is a lot more complexity than most programmers would like to fight just to be able to order the function definition the way they want.

---

" between pandas, list comprehensions, python collections library, sklearn, spyder, I feel I have a lot of power at my finger tips and its easy to do most of the machine learning I want. "

---

minimaxir 130 days ago [-]

I had a similar story. I used R for statistics at college, but only base R, and it is verbose even for basic data manipulation. The scripts I made for my older data blog posts are effectively incomprehensible to me now.

I ended up learning how to use Python/pandas/IPython because I had had enough and wanted a second option on how to do data analysis.

Then the R package dplyr was released in 2013, alleviating most annoyances I had with R. dplyr/ggplot2 alone are strong reasons to stick with the R ecosystem. (not that Python is bad/worse; as I mention at the post, both ecosystems are worth knowing)

newen 130 days ago [-]

Same story. I use data.table and ggplot2, with a couple of dplyr functions, for pretty much all of my plotting and analysis now.

bsg75 130 days ago [-]

I use both, but R for interactive analysis and reporting, Python for data transformations (ETL).

While the syntax of Python is "cleaner" for backend scripts, R feels more straightforward when working with dataframes (dplyr) resulting in things to report on. The syntax for ggplot2 fits the same category.

As much as having one languages for both categories would be nice, using both today seems like a better option.

paultopia 130 days ago [-]

The thing that makes me sad about R is textmining. TM makes me sad, strings-as-factors makes me very sad. But maybe I'll try tidytext...

disgruntledphd2 129 days ago [-]

Yeah, Python is way, way better for text. And I say that as a long-time R user. R really doesn't like things that can't be represented as datasets.

---

https://en.m.wikipedia.org/wiki/BIOS_interrupt_call

---

dcosson 1 day ago [-]

Totally a tangent but, don’t use moment.js! It inherits some of the parsing and other quirks from the awfulness that is the JS Date object, the mutability almost certainly will lead to being bit by a couple of bugs before you actually internalize it, and the time zone support is kind of tacked on as an afterthought.

I highly recommend js-joda, particularly if you’ll ever be computing/showing things to your users in different time zones. It actually treats dates and times rigorously, and has an api that makes it clear what kinds of operations make sense to do on a zoned vs local datetime and makes things like converting between timezones vs transposing them explicit and simple.

reply

armandososa 13 hours ago [-]

Thank you for this comment. I've been suffering for JS Date quirks for a long time, and haven't found a solution that alleviates the issues I've been having. Looks like js-joda is exactly what I need.

Link for the lazy: https://github.com/js-joda/js-joda

reply

always_good 1 day ago [-]

I haven't tried it, but https://date-fns.org/ looks nice. For example, immutable API and just uses native Date objects.

reply

dcosson 1 day ago [-]

Yeah it looks nicely simple & modular, but it probably works best in Node.js since your servers can all be in UTC.

In general I see the appeal of using a small shim around a standard library thing rather than re-implementing something totally new, but JS Date is bad enough that you're better off staying away altogether. It's just hard to use correctly since there's no "timezone unaware" object available and it always assumes the local timezone, so users' browsers in different timezones treat them differently. Lots of seemingly simple things (e.g. a time + timezone input picker) are easy to mess up because you end up accidentally implicitly converting things to the local time.

reply

---

" The problem with malloc() is not only that it can lead to heap fragmentation and a failure of the program at runtime. In many libraries, it is not reentrant or thread-safe (see https://stackoverflow.com/questions/855763/is-malloc-thread-safe). Using malloc() from a thread or from an interrupt is most likely a very dangerous thing! "

---

great read, toread:

http://www.nadler.com/embedded/newlibAndFreeRTOS.html

---

some C libraries for embedded:

apparently the frontrunners are 'newlib' and its smaller derivative, 'newlib-nano', for GCC ARM toolchains. Note that both of them apparently call malloc() inside printf and maybe elsewhere.

https://github.com/32bitmicro/newlib-nano-1.0

https://github.com/iperry/newlib-nano-2.1 http://www.ti.com/tool/msp430-gcc-opensource http://www.nongnu.org/avr-libc/

" Differences between Newlib and Newlib-Nano include:

    Newlib-Nano is optimized for size.
    The printf and scanf family of routines have been re-implemented in Newlib-Nano to remove a direct dependency on the floating-point input/output handling code. Projects that need to handle floating-point values using these functions must now explicitly request the feature during linking, as described above.
    The printf and scanf family of routines in Newlib-Nano support only conversion specifiers defined in C89 standard. This provides a good balance between small memory footprint and full feature formatted input/output.
    Newlib-Nano removes the now redundant integer-only implementations of the printf/scanf family of routines (iprintf/iscanf, etc).  These functions now alias the standard routines.
    In Newlib-Nano, only unwritten buffered data is flushed on exit.  Open streams are not closed.
    In Newlib-Nano, the dynamic memory allocator has been re-implemented

...

Q: What is the difference between newlib and newlib-nano ?

A: Recently ARM decided to make two versions of the library: one that would really work in Linux without the need to tweak it, and a smaller one optimized for "small" embedded applications. They chose to modify the existing one to make it more easy to use with Linux, and to create a new one, which they called "nano" for small embedded apps. So the standard one became larger but the new one is much smaller (faster, uses less stack, etc.). You can optionally disable floating-point support from it for even more performance (if you don't use floating-point of course).

Historically, the GCC and its libraries were originally designed for producing applications that operate in Unix-like OSes. Until recently the library in the ARM port of GCC was still oriented this way, although not completely finished. As a result those who actually tried to use it in Unix-like OSes (Linux) in ARM applications had to tweak it before it worked, while those who did not use Linux (most embedded applications) were losing a lot of code and execution time, wasted by compatibility with an absent OS. "

...

Newlib, the C library in the toolchain, implements printf functions that are so complicated they require about 37K bytes of FLASH and 5K bytes of RAM to run a simple hello-world program. That's far too large for MCU programming where you might need printf functionality for debugging and logging purposes. The good news is that there is plenty of unnecessary "fat" in libraries that can be cut.

The diet plan for libraries is to cut the unnecessary features, re-implement features with simpler logic, and build while optimizing for size. It results in a set of new libraries called newlib-nano. Namely based on newlib, but with a much smaller size.

Newlib-nano cuts some features that were added after C89, which are believed to be rarely used in MCU programming. By limiting the format converter to the C89 standard, format string processing code in printf is greatly reduced. By removing the iov buffering, all IO function sizes are again significantly reduced. Removal of wide char support in non-wide char function further squeezes string IO logic. Newlib-nano also extensively uses the weak symbol technique to exclude features that are rarely used in normal MCU programs. For example, referencing floating point IO and atexit as weak functions dramatically cuts the size of printf() and exit().

Newlib-nano also re-implements memory allocation functions, to replace the original ones that have overall better performance but with lots of complex logic which increases code size. The so called nano-allocator uses simple and native algorithms to handle allocation, de-allocation, and defragmentation. It works effectively when the total memory that can be allocated is small. More importantly, it is only about one sixth of the original size.

Newlib-nano is built with optimization level Os. This results in smaller memcpy and memset because newlib chooses a simple version of these functions when it finds them built with Os. It also discards some optimizations in C++ libraries that are large. An additional build flag for newlib-nano is -fno-exception, which disables the exception handling of libraries. This is acknowledged to be acceptable by some MCU C++ developers.

To summarize, the newlib-nano can cut the size of hello-world programs by around 80%. In extreme cases for C++ programs, the size reduction could exceed 90%.

It is easy to use newlib-nano in real projects with GCC ARM Embedded 4.7. Normally, it is only necessary to specify one additional linker option. Driver specifications in the toolchain will link with newlib-nano libraries instead of normal libraries.

"

so newlib sounds comparable to musl and dietlibc, and newlib-nano sounds even better:

[6] : "Bloat comparison musl uClibc dietlibc glibc Complete .a set 426k 500k 120k 2.0M † Complete .so set 527k 560k 185k 7.9M † Smallest static C program 1.8k 5k 0.2k 662k Static hello (using printf) 13k 70k 6k 662k Dynamic overhead (min. dirty) 20k 40k 40k 48k Static overhead (min. dirty) 8k 12k 8k 28k Static stdio overhead (min. dirty) 8k 24k 16k 36k Configurable featureset no yes minimal minimal "

---

list of syscalls required for Newlib:

https://sourceware.org/newlib/libc.html#Syscalls

---

http://wiki.osdev.org/C_Library claims that with PDClib "10 (plus one optional) required syscalls need to be implemented "

---

" The point is that PDCLib was designed for generic OSDev work, in a way that few other libraries are likely to be. It is probably your best chance at getting what you are looking for.

Lacking a PDCLib discussion forum, I'll take the opportunity to talk about it right here, as I consider any perceived shortcomings of PDCLib in that particular role to be design defects.

One thing to note about PDCLib. When I passed the project to Owen, he pushed PDCLib towards a point where it was working for him (gandr). This included, aside from several bugfixes:

    time.h
    uchar.h
    threads.h
    wchar.h
    wctype.h
    manpages
    pthreads integration
    dlmalloc integration
    Jam build instead of Makefile

This work was commendable, and I am thankful Owen did it, as it showed the base work I did was scaling to production work. But when I picked up interest in the project again, I found some of those additions to be not quite up to the standards I had set myself in my original work. I also found it somewhat hard to turn them into that direction without ripping things apart. (Some things that should have been strictly optional were not, really.)

So I did set up a branch in the repository, called "retrace". It branches at the point I had left the project, and attempts to "retrace" the improvements and fixes Owen applied, piecemeal, while keeping "my" structure intact. I looked at Owen's "default" branch for inspiration, but added things "my way".

You get that branch via "hg update -rretrace".

Give it a try. It is not as advanced as Owen's work, but I have backported the bugfixes to the existing code, and added a time.h implementation of my own; but there is no hint of dlmalloc(), threads, or wide character support yet (as other things in my life took over).

Instead of dlmalloc, there is a really primitive "placeholder" function, allocpages(), that you could plug into your kernel specifics to "just get it working".

From what I understood, you are looking for a "no frills" library, and PDCLib "retrace" is probably closer to that than PDCLib "default".

I might get back to working on that PDCLib branch at some point in the future. While I have completely dropped out of OSDev work (for quite some time now), I quite enjoy tinkering with this project from time to time when I am bored. I just am no longer bored as much as I was back when I started PDCLib, or when I picked it up again this year...

_________________ Every good solution is obvious once you've found it. "

---

trying to identify the 10 syscalls in

https://bitbucket.org/pdclib/pdclib/src/c8dc861df697/platform/posix/?at=default

'externs':

fork execve wait _exit unlink link environ (i guess this is not a fn call though?)

also in files in there we also see the following external functions being used:

mmap munmap open read write lseek close signal raise

and these, but i don't think they are syscalls: strncmp strlen

also note that file '/dev/urandom' is used (in 'tmpfile')

also there's a bunch of constants, such as stdin, stdout, stderr, in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/functions/_PDCLIB/_PDCLIB_stdinit.c?at=default&fileviewer=file-view-default

and a few more constants in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/includes/signal.h?at=default&fileviewer=file-view-default

and a few more in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/internals/_PDCLIB_config.h?at=default&fileviewer=file-view-default

---

porting newlib:

http://www.embecosm.com/appnotes/ean9/ean9-howto-newlib-1.0.html

---

http://www.embecosm.com/appnotes/ean9/ean9-howto-newlib-1.0.html mentions the following syscalls:

"...requires an implementation of eighteen system calls and the definition of one global data structure..."

5.3. Standard System Call Implementations

    5.3.1. Error Handling
    5.3.2. The Global Environment, environ
    5.3.3. Exit a program, _exit
    5.3.4. Closing a file, close
    5.3.5. Transfer Control to a New Process, execve
    5.3.6. Create a new process, fork
    5.3.7. Provide the Status of an Open File, fstat
    5.3.8. Get the Current Process ID, getpid
    5.3.9. Determine the Nature of a Stream, isatty
    5.3.10. Send a Signal, kill
    5.3.11. Rename an existing file, link
    5.3.12. Set Position in a File, lseek
    5.3.13. Open a file, open
    5.3.14. Read from a File, read
    5.3.15. Allocate more Heap, sbrk
    5.3.16. Status of a File (by Name), stat
    5.3.17. Provide Process Timing Information, times
    5.3.18. Remove a File's Directory Entry, unlink
    5.3.19. Wait for a Child Process, wait
    5.3.20. Write to a File, write

---

so comparing PDClib's platform-specific layer with newlib's, the following are in the intersection:

fork, execve, wait, _exit, unlink, link, environ, open, read, write, lseek, close, raise ('kill' is used in newlib to send a signal), and memory allocation (sbrk in newlib, mmap in PDClib?)

additional functionality only in newlib: fstat getpid isatty stat times

additional functionality only in PDClib: memory deallocation (munmap) signal handler setup (signal) /dev/urandom

---

erickt 935 days ago [-]

No it's not a stupid question. libc (or the CRT on Windows) really is the library that exposes all the user space system libraries. It contains the functions to do IO, sockets, threads, etc. So we use it to expose that functionality to Rust users.

Now there are some languages, namely Go, that skip libc and just implement directly against the syscall interface. Go has the advantage of being able to draw from Google's vast experience interacting deep within the system, so it was comparatively cheap for them to do this.

For rust, it never really felt like it was worth the effort for the benefit we'd get out of it. It was more important to get the language done.

barrkel 935 days ago [-]

Programming languages on Windows other than C and C++ typically link with kernel32.dll and so on, and not a C runtime. This gives a stable interface at a slightly higher level of abstraction than syscalls. Relying on libc is more a Unixism; without duplicating a bunch of the work in libc, you simply can't use many system services in ways the end user expects - network name resolution comes to mind (NSS), but other things, like threads, don't have portable standards at a lower level.

barosl 933 days ago [-]

There have been some efforts to make Rust utilize the native Windows API instead of libc on Windows. While this is ongoing, the lack of experienced Windows developers participating in the Rust project has been slowing down the progress. We'll be very happy to get your contribution!

maxerickson 934 days ago [-]

Python links the VC runtime (so not even the system CRT).

MinGW? links the system CRT by default, which is one reason that it can be fussy to compile Python extensions with it.

(I'm more curious about what a survey would reveal than I am trying to argue with you about what you said)

---

here are some non-portable functions in various alternate versions of Windows libc:

https://docs.microsoft.com/en-us/cpp/cppcx/crt-functions-not-supported-in-universal-windows-platform-apps

do we really want to follow their lead there? There is no stdin/stdout; there is no 'system' or 'execve' to execute other applications; there are no environment variables; there are no pipes.

hmm yeah maybe we should follow their lead here, actually, for Boot if not for Oot; Boot is supposed to be minimalistic.

---

" for bare metal, there is no underlying OS. Is there a standard related to how a c library should be implemented ... Generally you do without. Why would you need such things without an operating system to support them? memcpy and such sure. File systems, not necessarily, although implemented fopen, close, etc is trivial against ram for example. printf() is very very very heavy, tons and tons of code required, do without. any I/O replace or do without. newlib is pretty extreme, but does help if you cant do without, but you have to implement the system on the backend anyway, so do you need the extra layer? – old_timer Mar 22 '16 at 2:36 ... With Newlib, there is a layer below it called "libgloss" which contains (or you write) a couple of dozen functions for your platform. For example, a getchar and putchar which know about your hardware's UART; then Newlib layers printf on top of these. File I/O will similarly rely on a few primitives. – Brian Drummond Mar 21 '16 at 23:02 ... yeah, i didn't read pipe's 2nd paragraph carefully. besides dealing with stdin and stdout and stderr (which takes care of putchar() and getchar()) which directs I/O from/to a UART, if your platform has file storage, like with a flash, then you have to write glue for that also. and you have to have the means to malloc() and free(). i think if you take care of those issues, you can pretty much run portable C in your embedded target (no argv nor argc). – robert bristow-johnson Mar 21 '16 at 23:27 ... fopen and fclose and malloc for that matter ARE system calls they imply a system, file system, memory management, etc. memcpy, strcpy, are not... "
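The libgloss layering described above can be sketched in a few lines: a `_write()` that pushes stdout/stderr bytes at a UART, on top of which newlib builds printf and friends. This is an illustrative sketch, not newlib's actual code; the UART is simulated with a capture buffer so the sketch is self-contained, where real glue would poke a memory-mapped TX register inside uart_putc().

```c
#include <stddef.h>

/* Simulated UART TX FIFO; a real port would write a MMIO register here. */
static char uart_fifo[256];
static size_t uart_len;

static void uart_putc(char c)           /* stand-in for the hardware write */
{
    if (uart_len < sizeof uart_fifo)
        uart_fifo[uart_len++] = c;
}

/* newlib-style write stub: only stdout (1) and stderr (2) are wired up */
int _write(int fd, const char *buf, size_t len)
{
    if (fd != 1 && fd != 2)
        return -1;                      /* no file storage on this target */
    for (size_t i = 0; i < len; i++)
        uart_putc(buf[i]);
    return (int)len;
}
```

With this one stub (plus sbrk for malloc), newlib's stdout path works on bare metal.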

---

First off, the C standard defines something called a "freestanding" implementation, as opposed to a "hosted" implementation (which is what most of us are familiar with, the full range of C functions supported by the underlying OS).

A "freestanding" implementation needs to define only a subset of the C library headers, namely those that do not require library support, or even the definition of any functions (they merely do #defines and typedefs):

    <float.h>
    <iso646.h>
    <limits.h>
    <stdalign.h>
    <stdarg.h>
    <stdbool.h>
    <stddef.h>
    <stdint.h>
    <stdnoreturn.h>
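A translation unit that stays within those headers needs nothing from the environment; a sketch (compilable with e.g. `gcc -ffreestanding -c`):

```c
/* Uses only freestanding headers: types, limits, and macros,
   no library functions at all. */
#include <stdint.h>
#include <stdbool.h>

/* overflow-checked 32-bit add: pure computation */
bool add_i32(int32_t a, int32_t b, int32_t *out)
{
    if ((b > 0 && a > INT32_MAX - b) || (b < 0 && a < INT32_MIN + b))
        return false;                   /* would overflow */
    *out = a + b;
    return true;
}
```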

When you're taking the next step toward a hosted implementation, you will find that there are only very few functions that really need to interface "the system" in any way, with the rest of the library being implementable on top of those "primitives". In implementing the PDCLib, I made some effort to isolate them in a separate subdirectory for easy identification when porting the lib to a new platform (examples for the Linux port in parenthesis):

    getenv() (extern char * * environ)
    system() (fork() / execve() / wait())
    malloc() and free() (brk() / sbrk())
    _Exit() (_exit())
    time() (not implemented yet)

And for <stdio.h> (arguably the most "OS-involved" of the C99 headers):

    some way to open a file (open())
    some way to close it (close())
    some way to remove it (unlink())
    some way to rename it (link() / unlink())
    some way to write to it (write())
    some way to read from it (read())
    some way to reposition within it (lseek())
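To illustrate how the rest of <stdio.h> layers over those few primitives, here is a sketch of a buffered character-output path that only ever touches the system through a single write-like call. The names (sys_write, buf_putc) are made up for illustration and are not PDCLib's internals:

```c
#include <stddef.h>

/* the single "system" primitive this sketch depends on */
extern long sys_write(int fd, const void *buf, size_t n);

static char obuf[64];
static size_t ofill;

static void oflush(void)
{
    if (ofill) {
        sys_write(1, obuf, ofill);      /* fd 1 = stdout */
        ofill = 0;
    }
}

/* putc-like entry point; flushes on full buffer or newline */
int buf_putc(int c)
{
    obuf[ofill++] = (char)c;
    if (ofill == sizeof obuf || c == '\n')
        oflush();
    return c;
}
```

Everything above sys_write is portable C; only the primitive needs porting.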

Certain details of the library are optional, with the standard merely offering them to be implemented in a standard way but not making such an implementation a requirement.

    The time() function may legally just return (time_t)-1 if no time-keeping mechanics are available.
    The signal handlers described for <signal.h> need not be invoked by anything other than a call to raise(); there is no requirement that the system actually sends something like SIGSEGV to the application.
    The C11 header <threads.h>, which is (for obvious reasons) very dependent on the OS, need not be provided at all if the implementation defines __STDC_NO_THREADS__...
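The first bullet can be made concrete in a couple of lines; the function is named stub_time here (rather than time) only to avoid colliding with a host libc when experimenting:

```c
#include <time.h>

/* A conforming time() for a platform with no clock: always reports
   that the calendar time is not available. */
time_t stub_time(time_t *t)
{
    if (t)
        *t = (time_t)-1;
    return (time_t)-1;
}
```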

There are more examples, but I don't have them at hand right now.

The rest of the library can be implemented without any help from the environment.(*)

(*)Caveat: The PDCLib implementation is not complete yet, so I might have overlooked a thing or two. ;-)

---

another subset of libc:

https://github.com/alexfru/SmallerC/wiki/Smaller-C-Standard-Library-Wiki https://github.com/alexfru/SmallerC/tree/2463a83c7179bfb9440016d5fe6ce6732e89fb4b/v0100/srclib

---

more on C standard library (libc) implementations:

" distinguish between `Freestanding functions' and `Hosted functions' (this is not an entirely accurate use of the terms, but bear with me) in standard libraries. For instance, the function strlen is completely standalone, while a function like printf relies on an stdout object, which in return relies on the concept of a standard output device. As such, you can split a standard library like libc into two sets of functions: The functions that are freestanding and can be used in a kernel without issue, and functions that directly or indirectly relies on system calls or other hosted services. "
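For instance, strlen from the "freestanding" set is pure computation with no dependence on syscalls or hosted services; a minimal sketch:

```c
#include <stddef.h>

/* strlen needs nothing from the environment: no syscalls,
   no global state, just pointer arithmetic */
size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p)
        p++;
    return (size_t)(p - s);
}
```

printf, by contrast, ultimately bottoms out in a write-to-stdout primitive that only a hosted environment can supply.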


" First, a summary:

    Software Interrupts: Available on all CPUs. Smallest instruction (2 bytes).
    Call Gates: Available on all CPUs. A little faster than software interrupts. Larger instruction (7 bytes). A little messier to implement.
    Exceptions: Available on all CPUs. Typically slower and messier depending on which exception you use. For example, you could use the "UD2" instruction and detect it within the undefined opcode handler, but then there's an additional "was it a system call" test. For "busier" exception handlers it can be worse.
    SYSENTER/SYSEXIT: Available on Intel Pentium II and later Intel CPUs (including in long mode). Also supported by recent AMD CPUs (but not in long mode). Fast and messy because GDT entries need to be in order, return information (EIP, ESP) is not saved during SYSENTER, segment limits and base addresses for CS and SS are trashed, and interrupts are disabled on SYSENTER but not restored on SYSEXIT.
    SYSCALL/SYSRET: Available on AMD K6 and later AMD CPUs. Supported by Intel chips for 64 bit code only (i.e. 16 bit or 32 bit "compatibility" code running on a 64 bit Intel CPU in long mode can not use SYSCALL). Fast and messy because GDT entries need to be in order, return EIP is saved in ECX (or RCX), EFLAGS is changed (different actions for 64 bit code and 16/32 bit code), ESP not changed on call or return, and segment limits and base addresses for CS and SS are trashed."

---

"
    wide character library support in <wchar.h> and <wctype.h> (library issue, missing support)
    complex (and imaginary) support in <complex.h> (broken)
    extended identifiers (missing)
    extended integer types in <stdint.h> (missing)
    treatment of error conditions by math library functions (math_errhandling) (library issue, missing)
    IEC 60559 (also known as IEC 559 or IEEE arithmetic) support (broken)
    additional predefined macro names (missing)
    standard pragmas (missing)
"

---

I use newlib on my Cortex-M3 with 32kB RAM, and to eliminate malloc() you can use siprintf() or sniprintf().

Pro: No more calls to malloc().

Con: It does not support formatting float and double, and is not really portable this way.

answered Feb 10 '11 at 11:29 by Turbo J


If you use newlib and do not implement the sbrk syscall, then any function you use that requires malloc will generate a linker error, which will prevent you from inadvertently using a call that requires dynamic memory. So I would suggest that you do that, and then simply avoid those functions that cause the linker error. You can modify or override any library functions you do not wish to use.

answered Feb 12 '11 at 0:19 by Clifford
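The sbrk glue in question is typically only a few lines. A self-contained sketch of a newlib-style _sbrk, using a static array where a real port would usually bump a pointer from the linker-provided _end symbol:

```c
#include <stddef.h>
#include <errno.h>

/* Fixed heap region; real bare-metal ports usually start at the
   linker symbol _end and grow toward the stack instead. */
static char heap[4096];
static size_t brk_off;

void *_sbrk(ptrdiff_t incr)
{
    if (incr < 0 || brk_off + (size_t)incr > sizeof heap) {
        errno = ENOMEM;                 /* heap exhausted */
        return (void *)-1;
    }
    void *prev = &heap[brk_off];        /* sbrk returns the old break */
    brk_off += (size_t)incr;
    return prev;
}
```

Leaving this out entirely is what triggers the linker-error trick described in the answer above.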


printf() is not good for small embedded realtime systems!

Actually it is worse than malloc in many ways. Variable argument lists, very complex formatting, float number support when you don't need it etc etc. printf() comes with an enormous overhead, and the compiler will not be able to reduce it, as every parameter passed to it is evaluated in runtime.

printf() is perhaps ok for hobbyists and beginners still learning C. But if you are a professional programmer, you really ought to write your own serial monitor / LCD routines. You will dramatically improve the program performance and flash consumption.

 There is nothing wrong with using printf in embedded systems. There's something wrong with printf implementations that call malloc and do all sorts of useless things. A simple printf implementation without floating point (or that ignores exactness issues when printing floating point) and without POSIX i18n %n$ argument specifiers, can be implemented in about 2k of code, and allows the calling application to be much smaller and simpler than if it had to duplicate printf-like functionality all over the place. – R.. Feb 7 '11 at 19:00
 "There is nothing wrong with printf..." / "...can be implemented in about 2k of code". You obviously haven't worked with small microcontroller applications. 2k is huge. And if you can't even write a simple RS-232 serial monitor by yourself to replace printf(), you shouldn't be programming embedded systems in the first place. – Lundin Feb 7 '11 at 21:00

... It is an answer. For a small embedded realtime system, you shouldn't be using libraries stdio and stdlib in the first place! MISRA-C bans stdio entirely, for example. – Lundin Feb 7 '11 at 14:18

 As for malloc(), I agree completely with everything stated. Apart from the issues with fragmentation and memory leaks, you would also need to allocate a heap in RAM (likely several kb), and in these kind of systems you rarely have enough read/write data to justify it. – Lundin Feb 7 '11 at 14:45
 By the way, if you omit wide character support or define wchar_t as char so that the l modifier can be ignored, and omit floating point, and don't tweak anything for performance, I bet you can get printf a lot smaller than 2k. Maybe under 0.5k even. – R.. Jun 23 '13 at 2:33 
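As a rough plausibility check on those size estimates, a %d/%x/%s-only formatter over a caller-supplied character sink, in the spirit of tinyprintf, fits in a few dozen lines. This is an illustrative sketch, not any particular library's code:

```c
#include <stdarg.h>
#include <stddef.h>

/* character sink supplied by the caller (e.g. a UART putc) */
void (*tp_out)(char);

static void tp_utoa(unsigned v, unsigned base)
{
    char tmp[32];
    size_t i = 0;
    do {                                /* emit digits in reverse */
        unsigned d = v % base;
        tmp[i++] = (char)(d < 10 ? '0' + d : 'a' + d - 10);
        v /= base;
    } while (v);
    while (i)
        tp_out(tmp[--i]);
}

void tiny_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt != '%') { tp_out(*fmt); continue; }
        switch (*++fmt) {
        case 'd': {
            int v = va_arg(ap, int);
            if (v < 0) { tp_out('-'); tp_utoa(0u - (unsigned)v, 10); }
            else tp_utoa((unsigned)v, 10);
            break;
        }
        case 'x': tp_utoa(va_arg(ap, unsigned), 16); break;
        case 's': { const char *s = va_arg(ap, const char *);
                    while (*s) tp_out(*s++); break; }
        case '%': tp_out('%'); break;
        default: goto done;             /* unsupported specifier */
        }
    }
done:
    va_end(ap);
}
```

No malloc, no float support, no width/precision handling: exactly the trade-offs the thread describes.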

---

I had similar needs and found that klibc fit it quite well. The only downside (for commercial use) is that the distribution includes a few GPL-licensed files, even though most of it is BSD-licensed. I have hacked a minimal version of it here.

This is even more limited than PDCLib, and suitable if you just need a few basic functions such as printf and strtok. Compiles to just 4kB with all functions included.

edited Dec 2 '16 at 6:08

answered Sep 9 '11 at 20:21

Baselibc

This is a very simple libc for embedded systems. Mainly geared for 32-bit microcontrollers in the 10-100kB memory range. The library compiles to less than 5kB total on Cortex-M3, and much less if some functions aren't used.

The code is based on klibc and tinyprintf modules, and licensed under the BSD license.

asprintf.c atoi.c atol.c atoll.c bsearch.c bzero.c calloc.c fgets.c inline.c jrand48.c lrand48.c malloc.c malloc.h memccpy.c memchr.c memcmp.c memcpy.c memfile.c memmem.c memmove.c memrchr.c memset.c memswap.c mrand48.c nrand48.c qsort.c realloc.c sprintf.c srand48.c sscanf.c strcasecmp.c strcat.c strchr.c strcmp.c strcpy.c strcspn.c strdup.c strlcat.c strlcpy.c strlen.c strncasecmp.c strncat.c strncmp.c strncpy.c strndup.c strnlen.c strntoimax.c strntoumax.c strpbrk.c strrchr.c strsep.c strspn.c strstr.c strtoimax.c strtok.c strtok_r strtol.c strtoll.c strtoul.c strtoull.c strtoumax.c tinyprintf.c vasprintf.c vprintf.c vsprintf.c vsscanf.c

---

smallclib (MIPS):

ftp://ftp.trace32.com/Education/CodeScape/Documentation/Toolchain/MIPS_Toolchain_Small_C_Library_Reference_Manual_1.3.10.pdf

"The sources are published on very liberal licence (BSD-like) and available after toolchain's installation in Toolchains/mips-mti-elf/2015.06-05/src/smallclib." [7]

"Having successfully ported smallclib, I have to admit I like it more than pdclib." [8]

" 2. SmallCLib

The goal of SmallCLib is to provide as much functionality as possible in a small amount of space, and it is intended primarily for embedded use. The standalone library containing size-optimized versions of functions is usable from the MIPS bare metal toolchain. SmallCLib comes in two variants:

2.1. ISO conforming. The ISO C99 conforming version of SmallCLib, referred to as the small variant, does not omit functionality that's required by the standards in order to achieve a smaller library. Most of the space saving in this version comes from aggressive refactoring of the code to eliminate redundancy, and some of the space saving comes at the cost of performance. This version provides an IEEE 754 compliant software floating point math library.

2.2. Non-conforming. This version, referred to as the tiny variant, omits some ISO features in order to achieve higher code density as compared to the compliant version. The following sections describe differences between the ISO conforming and non-conforming versions.

2.2.1. File and standard IO. The file and standard IO functions provided by this version operate only on standard streams (stdin, stdout and stderr). All the supported streams are un-buffered.

2.2.2. Locales. Locale support in this version is limited to UTF-8 encoding only, which allows direct code paths and better performance. The only locale supported by this version is the default C.UTF-8 locale.

2.2.3. Floating point support. The floating point support in this version differs from the IEEE 754 standard in the following ways: NaN, INF and denormal input values are not handled (operations involving such inputs generate unpredictable results); the sign of zero is ignored; no IEEE exceptions are flagged.

2.2.4. Reentrancy. This version of SmallCLib does not support reentrancy. ...

This table summarizes the differences between newlib and the size-optimized variants. The following sections provide details about specific function behaviour.

Feature                                   Newlib                        SmallClib                     TinyClib

Complex arithmetic                        Yes                           No                            No
Character handling and localization       Full locale support           Full locale support           Only 7-bit ASCII (default C) locale
Math                                      IEEE754 / ISO C99 compliant   IEEE754 / ISO C99 compliant   No support for denormal, Infinity, NaN arguments
Extended multi-byte / wide char utils     Yes                           Yes                           No
Reentrancy                                Yes                           Not supported                 Not supported
" [9]

---

" I was not aware of _format_parser_int. It makes perfect sense to use it instead of the generic _format_parser. ...

then we would no longer need to pull math functions like frexp and fpclassify as well as float operations __umodi and __udivi (which removes the need to link with libgcc), because it is only the format parser that requires them. ... Firstly, I'd like not to use sprintf inside kernel, but rather snprintf to avoid buffer overflow errors. ... It's perferable to get rid of all floating point functions in the kernel. This will make context switches easier as CP1 registers AFAIK are not shadowed. ...

    Anyway, the current subset of files picked for compilation is the absolute minimum that is required to use kprintf, sprintf and strlen.

Let's also include memcpy. "

---

on PDclib in a MIPS OS kernel project:

https://github.com/cahirwpz/mimiker/issues/3

(excerpted/paraphrased)


---

" It is easier to pick some functions from smallclib and ignore others; apparently the dependencies between functions are kept to a minimum. I chose to build *printf functions, the most useful string/memory functions, is* functions and some useful utilities (eg. strerror, atoi, setjmp), but expanding this selection will be very easy if we need to use others. The glue layer is very thin! Using the aforementioned subset of smallclib I only had to implement two functions (write and lseek). Compare that with more than 20 files needed to correctly link pdclib. "

---

test of a subset of smallclib:

"

    // Simple printf.
    printf("=========================\nPrintf is working!\n");
    // Both stdout and stderr write bytes to UART1.
    fprintf(stderr,"Stderr working too!\n");
    // Formatting
    printf("This is a number: %d - %x\n", 123456, 123456);
    // String rendering
    const char* stringA = "This is an example string!";
    char stringB[100];
    sprintf(stringB, "Copied example: %s",stringA);
    puts(stringB);
    // String functions:
    printf("Above text has length %zu.\n", strlen(stringB));
    printf("Word: \"example\" is at: %ld.\n", strstr(stringB,"example") - stringB);
    memset(stringB,'a',20);
    puts(stringB);
    // The limits are defined
    printf("INT_MAX is %d\n", INT_MAX);

The size of the .text section for the entire kernel with this example: 24kB (without size optimisations enabled). Half of that size (12kB) was taken up by sprintf's format interpreter. "

---

list of POSIX 2008 headers:

http://pubs.opengroup.org/onlinepubs/9699919799/idx/head.html

---

" Supported/unsupported system calls:

    The BG/Q compute node kernel does not support all system calls, such as fork(), system(), usleep().
    The list of supported system calls is shown below. Calls not appearing on this list will return an ENOSYS error.
    For complete details, see chapter 5 in the "IBM System Blue Gene Solution: Blue Gene/Q Application Development" redbook.
    NOTE: system calls should not be confused with library calls. For example, statvfs is a library call and is not shown in the table below. It does call the statfs system call, which is supported, and shown in the table below. Another example is gethostname which is a library call and does not appear in this list. 

ftruncate64 futex getcwd getdents getdents64 getgroups getitimer getpid getrlimit getrusage gettid gettimeofday ioctl kill lseek lstat lstat64 mkdir mmap mremap munmap nanosleep open poll prctl pread64 pwrite64 read readlink readv rename rmdir rt_sigaction rt_sigprocmask sched_get_priority_max sched_get_priority_min sched_getaffinity sched_getparam sched_setscheduler sched_yield setitimer setrlimit sigaction signals sigprocmask socketcall stat stat64 statfs statfs64 symlink time times tmwrite truncate truncate64 uid umask uname unlink utime write writev

NOTE: There are over 30 environment variables that affect the behavior of the CNK. See CNK Environment Variables for more information. " -- [10]

---

" In particular, GCC string handling and any floating point library are going to bloat your code.

FreeRTOS includes a very cut down open source implementation of many string handling functions in a file called printf-stdarg.c. Including this in your project can greatly reduce both the amount of ROM used by the build, and the size of the stack required to be allocated to any task making a string handling library call (sprintf() for example). Note that printf-stdarg.c is open source, but not covered by the FreeRTOS license. Ensure you are happy with the license conditions stated in the file itself before use. "

---