proj-oot-ootCapabilitiesNotes1

read stdin, write stdout, write stderr

read/write other streams, files

read/write/execute any file under a location (note: read privilege is sufficient to import an Oot library, unless it uses extra-Oot extensions); network communications and other sensors/effectors are accessed via special locations

ulimit-style limits: number of spawned threads, amount of memory

see http://lua-users.org/wiki/SandBoxes section "Table of Variables"

note: it's important that "globals" aren't truly global, and can be hidden from function calls upon demand (also must hide even 'hidden' scoped globals)

should also be able to pass in read-only variable references, in order to give the host program a way to create other 'pipes' into the untrusted code (maybe)
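e.g. a minimal Python sketch of both ideas (a whitelisted set of 'globals' plus a read-only reference as the only pipe in); all names here are made up, and Python's exec is famously escapable, which is exactly why this wants language-level support:

    class ReadOnlyRef:
        # wraps a host object; untrusted code may read attributes but not write them
        def __init__(self, target):
            object.__setattr__(self, "_target", target)
        def __getattr__(self, name):
            return getattr(object.__getattribute__(self, "_target"), name)
        def __setattr__(self, name, value):
            raise PermissionError("read-only reference")

    def run_untrusted(source, **granted):
        # the snippet sees ONLY what is explicitly granted: no open(),
        # no __import__, and no access to the host's real globals
        env = {"__builtins__": {"len": len, "print": print}}
        env.update(granted)
        exec(source, env)

    class Config:
        mode = "production"

    run_untrusted("print(cfg.mode)", cfg=ReadOnlyRef(Config()))      # prints: production
    # run_untrusted("cfg.mode = 'owned'", cfg=ReadOnlyRef(Config())) # PermissionError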

CALLINGFUNCTION

---

---

shall we implement immutability, unaliasability, and purity (see ootDataModes) as capabilities, the same as security capabilities?

each value has attached capabilities, which map onto std attributes (see ootAttributes) or custom attributes (e.g. for stuff like Rails permissions). there are operators to remove capabilities.

sometimes you want to allow debug code to do something that you don't want normal code to do, e.g. printing to the console from a pure function. this should be possible if the point of capabilities is safety, but not if it is security. perhaps a concept like 'user groups' could help here: if you want to disallow something for safety, not security, leave the capability on for the DEBUG group; but if the purpose is security, then turn it off there too.

in essence these user groups allow the capabilities to be removed, but then to resurface when the value is passed to the right type of code segment. this seems to remove the need for a facility to pass capabilities around on their own, although maybe we should allow that too, in case i've forgotten something.

also need 'lockable' and 'unaliasable unless locked'

...in general, are capabilities just masking of object attributes (edges)?

in that case i guess there would be some sort of meta attribute that lists which attributes have been masked; this list itself would be masked unless you are in the 'admin' group
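a rough Python sketch of this (names like Capsule, drop, DEBUG are all hypothetical): drops are recorded as per-group masks rather than deletions, so a safety-motivated drop leaves DEBUG unmasked, a security-motivated drop masks every group, and the mask table itself is readable only from the 'admin' group:

    class Capsule:
        # a value plus its capability set, with per-group masks
        def __init__(self, value, caps):
            self.value = value
            self._caps = set(caps)
            self._masked = {}                    # group -> caps hidden from it

        def drop(self, cap, groups):
            # for a safety-motivated drop, leave "DEBUG" out of `groups`;
            # for a security-motivated drop, include it
            for g in groups:
                self._masked.setdefault(g, set()).add(cap)

        def caps_for(self, group):
            return self._caps - self._masked.get(group, set())

        def masked_attrs(self, group):
            # the meta attribute listing what was masked is itself masked
            if group != "admin":
                raise PermissionError("mask list is hidden")
            return dict(self._masked)

    v = Capsule("launch codes", {"read", "write", "console_print"})
    v.drop("console_print", groups={"NORMAL"})       # safety: DEBUG keeps it
    v.drop("write", groups={"NORMAL", "DEBUG"})      # security: gone everywhere
    assert "console_print" in v.caps_for("DEBUG")
    assert v.caps_for("NORMAL") == {"read"}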

---

" Loopholes in Object-Oriented Programming Languages

Some object-based programming languages (e.g. JavaScript, Java, and C#) provide ways to access resources in other ways than according to the rules above including the following:

        direct assignment to the instance variables of an object in Java and C#
        direct reflexive inspection of the meta-data of an object in Java and C#
        the pervasive ability to import primitive modules, e.g. java.io.File that enable external effects.

Such use of undeniable authority effectively defeats the security benefits of the Object-capability model. Caja and Joe-E are variants of JavaScript and Java, respectively, that impose restrictions to eliminate these loopholes. "

-- http://en.wikipedia.org/wiki/Object-capability_model

---

capability access controls on variables to pass read-only references, etc

---

"AppArmor? ("Application Armor") is a Linux kernel security module that allows the system administrator to restrict programs's capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths."

---

part of making oot good for sandboxing is being able to enforce a timeout on any function call or subprocess, but in a way that gives the called function or subprocess a chance to respond before the hard cutoff (e.g. a 'soft timeout', where an exception is raised within the called function/subprocess, followed by a 'hard timeout', where the called function or subprocess is unilaterally terminated)
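a minimal Python sketch of the two-stage cutoff, using a subprocess and Unix signal semantics (the mechanism is the point, not this particular API):

    import multiprocessing, signal, time

    class SoftTimeout(Exception):
        pass

    def _soft_timeout_handler(signum, frame):
        raise SoftTimeout()

    def worker():
        # the callee opts in: SIGTERM surfaces as a catchable exception
        signal.signal(signal.SIGTERM, _soft_timeout_handler)
        try:
            while True:
                time.sleep(0.1)          # pretend to do unbounded work
        except SoftTimeout:
            print("soft timeout: cleaning up and exiting")

    def run_with_timeout(target, soft, grace):
        p = multiprocessing.Process(target=target)
        p.start()
        p.join(soft)
        if p.is_alive():
            p.terminate()                # soft cutoff: a chance to respond
            p.join(grace)
        if p.is_alive():
            p.kill()                     # hard cutoff: unilateral SIGKILL
            p.join()

    if __name__ == "__main__":
        run_with_timeout(worker, soft=1.0, grace=0.5)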

---

list of capabilities in Linux:

http://man7.org/linux/man-pages/man7/capabilities.7.html

---

david-given 7 hours ago

Roughly put: in a capability based system, if you have a valid handle for a service, then you can use that service. But the only way you can get a valid handle is to ask your parent process for one --- handles are unforgeable. So your parent gets to check that you're legitimate.

...but your parent, in turn, has limited permissions, because the only way it can get a handle is to ask its parent. And when you ask your parent for a filesystem handle, your parent doesn't have to give you the handle it has. Your parent might give you a limited filesystem handle which only allows writes to /tmp.

...and when you, in turn, start a subprocess --- say, an instance of sqlite for use as a database --- you have to give it handles for the things it wants to do on your behalf. But your filesystem handle only allows writes to /tmp. That means that your sqlite instance can only write to /tmp too.

There's more to it than that, because in real life you also want to forward capabilities to other running services as part of RPCs, but you tend to end up with a natural container system with multiple redundant layers of isolation, all running at minimum necessary privilege, and all hardware-mediated.

Another really interesting capability-based OS is Genode:

https://genode.org/about/screenshots

https://genode.org/documentation/general-overview/index

It'll allow you to run complete OS kernels as capability-restricted Genode processes, effectively allowing virtual machines as first-class citizens.
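the /tmp attenuation described above, sketched in Python (a toy: in-process Python references are forgeable, so this only models the pattern that a capability OS actually enforces):

    import os

    class FsCap:
        # a filesystem handle rooted at `root`; holders cannot reach outside it
        def __init__(self, root, writable=True):
            self._root = os.path.realpath(root)
            self._writable = writable

        def _resolve(self, rel):
            p = os.path.realpath(os.path.join(self._root, rel))
            if os.path.commonpath([p, self._root]) != self._root:
                raise PermissionError("path escapes this handle's root")
            return p

        def write(self, rel, data):
            if not self._writable:
                raise PermissionError("handle is read-only")
            with open(self._resolve(rel), "w") as f:
                f.write(data)

        def restrict(self, subdir, writable=None):
            # attenuation: derive a weaker handle to pass down;
            # a child can never widen what it was given
            w = self._writable if writable is None else (writable and self._writable)
            return FsCap(self._resolve(subdir), w)

    parent_cap = FsCap("/")                  # what the parent holds
    child_cap = parent_cap.restrict("tmp")   # what it hands the child: /tmp only
    child_cap.write("scratch.txt", "ok")     # allowed
    # child_cap.write("../etc/passwd", "x")  # PermissionError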

Klathmon 7 hours ago

That's actually really beautifully simple.

Thanks for this explanation, it really helped the idea "click"

naasking 4 hours ago

And for another beautiful convergence: nowadays most software development is done in languages that naturally express capability patterns, namely memory-safe languages. That is, if you have a reference to an object or value, you have the authority to invoke any of the methods on that object or call any functions that accept such a value. So object references are capabilities.

Most such languages go wrong only by allowing "ambient authority", whereby any component in your program has the ability to turn a reference to a value that carries little authority into one that carries significantly more authority. For instance, consider the file open API: you're essentially turning a string, which exposes no unsafe or security-critical methods, into an object that can literally destroy your computer. And literally any library or component you use can open files. It's madness!

To make a memory safe language capability secure, you simply remove all sources of ambient authority. This mainly consists of globals that permit transitive access to mutable state, like file system APIs, global static variables that can reference transitively mutable data, etc.

These insecure APIs are then replaced with equivalent object instances which you can pass around, eg. you can only receive the authority to open a file if you're given a reference to the directory object which contains that file. The entry point of your program would then change to accept various capabilities instead of just string parameters.
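a sketch of the shape this gives a program, with pathlib.Path standing in for a directory capability (Python doesn't enforce any of this, and the path is made up; it just shows authority arriving as a parameter instead of via ambient open()):

    from pathlib import Path

    def count_lines(log_file):
        # this component's entire authority: one open file object
        return sum(1 for _ in log_file)

    def main(log_dir):
        # main's entire authority: one directory, handed in from outside;
        # nothing below ever turns an arbitrary string into a file
        with (log_dir / "app.log").open() as f:
            print(count_lines(f))

    if __name__ == "__main__":
        main(Path("/var/log/myapp"))   # the launcher, not the program, chooses this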

catern 3 hours ago

And as an even deeper correspondence: Once your language is memory-safe and capability-secure, you don't even need a kernel, let alone a microkernel. Rather than use hardware address spaces to separate and modularize components (the primary idea behind microkernels), you just pass capabilities around to your components. One component can't crash the whole system, because all it can access is the APIs you pass in to it. If you want to add isolated processes to your OS, just implement actors in your language: They're equivalent.

Of course, you can always have a bug in your APIs that allows an attack or bug to propagate. But that was always the case even without capability-safety. Capabilities can't give you everything for free. :)

nickpsecurity 2 hours ago

JX Operating System does something like this within a JVM. It runs on a microkernel but I think the mediation is app/language level.

monocasa 7 hours ago

So, going back to Win32, it's as if OpenFile also took a HANDLE that represented your abilities (or capabilities, if you will) within the security model, with the explicit ability to forward these handles (or new handles that represent a subset of the original's capabilities) to other processes if you choose.

jvert 14 minutes ago

In Win32, this handle is called the AccessToken https://msdn.microsoft.com/en-us/library/windows/desktop/aa3... and the calling thread's current access token is used by OpenFile to grant or deny the requested access.


---

adrianratnapala 7 hours ago

I'm no Windows expert, so I didn't know that Windows handles were used as security primitives.

I thought they were just a bit like file-descriptors or X11 window ids, or indeed pointers. Such handles do have a kind of role in authorization: once a process has convinced the system to give it some resource, then the (handle, processid) pair is all the system needs to check access.

However you typically gain the handle through something like `open()`, i.e. an ACLed request for a named resource. But with true capabilities you just inherit authorisation from some other capability -- possibly one granted to you by a different process.

That said, the difference from existing systems might be small. Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.

naasking 4 hours ago

> Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.

Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

A file open dialog from a program is simply the program's request for you to map a file into its namespace.

adrianratnapala 25 minutes ago

Hmm, that sounds a bit awkward -- every capability on the system would, at least conceptually, need a separate copy of the pruned name-tree. There might be good ways of implementing that, but it's not obvious.

On the other hand, I can see how capabilities would mix with ACLed namespaces to improve security. Essentially it would define the semantics of things like cgroups and chroot jails.

In fact sometimes I think that mainstream virtualisation is slowly reinventing -- in an incremental way -- what various alternative OSes advocated in a radical way. E.g. Linux cgroups and "one process per container" sounds quite a lot like Plan9's per-process view of the system.

catern 4 hours ago

>Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

Unfortunately this is usually quite heavyweight. In Unix-like systems, and in Plan 9, the canonical way to do this is to implement a filesystem. "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

One would much prefer to just put together a hierarchy of objects at the language level and automatically expose it... or something like that anyway.

naasking 3 hours ago

> "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

It needn't be. A file system is just a set of nested hashtables. The complexity of traditional file systems comes from the durable representation, but that isn't necessarily needed for first-class namespaces. You can just serialize and deserialize a namespace as needed on top of an ordinary file system.

> Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

Plan 9 obviously. The Plash capability secure shell [1]. There are probably a couple of others, but not too many overall.

[1] http://www.cs.jhu.edu/~seaborn/plash/plash-orig.html
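naasking's 'nested hashtables' point is easy to make concrete; a small Python sketch where leaves stand in for capabilities and access control is nothing but presence in the map:

    # a first-class namespace as nested dicts; leaves would be capabilities
    ns = {
        "etc": {"app.conf": "conf-cap"},
        "var": {"log": {"app.log": "log-cap"}},
    }

    def lookup(ns, path):
        node = ns
        for part in path.strip("/").split("/"):
            node = node[part]        # KeyError == no such name == no authority
        return node

    def restricted_view(ns, allowed_paths):
        # build a namespace containing only what the program should see;
        # no further access control is needed afterwards
        view = {}
        for path in allowed_paths:
            parts = path.strip("/").split("/")
            node = view
            for part in parts[:-1]:
                node = node.setdefault(part, {})
            node[parts[-1]] = lookup(ns, path)
        return view

    child_ns = restricted_view(ns, ["/var/log/app.log"])
    print(lookup(child_ns, "/var/log/app.log"))     # "log-cap"
    # lookup(child_ns, "/etc/app.conf")             # KeyError: name doesn't exist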


---

"basic exploit mitigations - including stack cookies, safe unlinking and access permission protection (by means of an MPU)." [1]

---

ridiculous_fish 12 hours ago

In Midori, objects are capabilities. This is possible because type safety is enforced and an object's type cannot be forged.

It's easy to imagine a capabilities system built around closures. For example, a capability to send on a socket could be an unforgeable closure that performs that sending. Are there any systems that work this way?
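a sketch of the closure idea in Python (closures here are inspectable via __closure__, so this only models what an unforgeable closure would buy you; the endpoint is hypothetical):

    import socket

    def make_send_cap(host, port):
        sock = socket.create_connection((host, port))
        def send(data):
            # the closure IS the capability: a holder can send on this one
            # connection, but cannot reach `sock`, reconnect, or receive
            sock.sendall(data)
        return send

    # send = make_send_cap("example.com", 80)   # creator holds full authority
    # send(b"GET / HTTP/1.0\r\n\r\n")           # holders can do only this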

andrewflnr 11 hours ago

More details on Midori: http://joeduffyblog.com/2015/11/03/blogging-about-midori/ This is the first of several blog posts by one of the authors. I haven't read all of them, but liked what I read so far. Does anyone have other resources with more detail?


---

dm319 2 days ago

Can anyone tell me how secure web browsers are on linux generally. I was looking at the pwn2own results and was quite impressed with the Edge compromise that got out of the host OS it was running in. However, I didn't see any web browsers on linux being compromised - is that because it is harder, or no-one bothered?

lima 2 days ago

Comparable to Windows. Windows is a high-profile target for web browsing so you're seeing more exploits.

Web browser escapes usually target the operating system kernel in order to break out of the sandbox. Chrome can reduce its attack surface on Linux using seccomp, and while there's a win32 syscall filter on Windows 10, I'm not sure if Chrome uses it. Windows has stronger kernel self protection features than Linux, but also more attack surface.

Chrome uses Linux namespaces (like LXC/Docker/...) for isolation in addition to seccomp-bpf.

Chrome is by far the most secure one right now, but as evidenced by this blog article, Firefox is catching up.

ryuuchin 1 day ago

Chrome disables all win32k syscalls in its renderer/extension/plugin processes (at the very least flash and pdf). This has been the default for a while now.

I believe the specific win32k syscall filter you're referring to (where you can white/black list specific syscalls) is still only accessible to Microsoft applications (Edge uses it).

johncolanduoni 2 days ago

The Chromium sandbox supports the Win32k mitigation, though it’s possible it’s not enabled at runtime.

ryuuchin 1 day ago

Back when grsecurity was still public I would've said that using it with Chrome provided the best security you could get on Linux. Between the defensive mitigations it added through PaX and all the other little things (like making chroots actually behave like jails) it greatly increased security. This is without even using any MAC system.

I'm not entirely sure how things stack up now. In the past I never had much faith in the vanilla linux kernel.


---

https://www.blackhat.com/docs/eu-17/materials/eu-17-Arnaboldi-Exposing-Hidden-Exploitable-Behaviors-In-Programming-Languages-Using-Differential-Fuzzing-wp.pdf

finds and describes vulnerabilities in several interpreted languages

---


" It's also debatable whether mlock() is a proper way to protect sensitive information. According to POSIX, mlock()ing a page guarantees that it

---

security

[2]

Building Hardware Systems for Information Flow Tracking (2010)

interesting security idea:

(quoted:)

DIFT: Dynamic Information Flow Tracking

DIFT taints data from untrusted sources: Extra tag bit per word marks if untrusted

Propagate taint during program execution

Check for unsafe uses of tainted data

eg Tainted code execution

eg Tainted pointer dereference (code & data)

eg Tainted SQL command

Operating System: Tag Aware; Save/restore tags, Cross-process info flow

Policy granularity: per operation type.

Propagate policy example (load). Enables: 1. propagate only from source register; 2. propagate only from source address; 3. propagate from both sources (OR mode, AND mode, or XOR mode).

Check policy example (load). Check enables: 1. check source register (if Tag(r1)==1 then security_trap); 2. check source address (if Tag(M[r1+offset])==1 then security_trap). Both enables may be set simultaneously.

Support for checks across multiple tag bits

Performance: <1% slowdown

4 bits/word in registers, caches, memory

12.5% storage overhead
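the propagate/check split above, as a toy interpreter fragment in Python (OR-mode propagation; traps on tainted pointer dereference and tainted code execution):

    class SecurityTrap(Exception):
        pass

    class Word:
        # a machine word plus its taint tag
        def __init__(self, value, tainted=False):
            self.value, self.tainted = value, tainted

    def alu_add(a, b):
        # propagate policy, OR mode: result tainted if either source is
        return Word(a.value + b.value, a.tainted or b.tainted)

    def load(mem, addr):
        if addr.tainted:               # check policy: tainted pointer dereference
            raise SecurityTrap("tainted pointer dereference")
        cell = mem[addr.value]
        return Word(cell.value, cell.tainted)

    def jump(target):
        if target.tainted:             # check policy: tainted code execution
            raise SecurityTrap("tainted code execution")
        return target.value

    net_byte = Word(0x40, tainted=True)   # tagged untrusted at its source
    target = alu_add(net_byte, Word(8))   # taint propagates through the ALU
    try:
        jump(target)
    except SecurityTrap as e:
        print("trap:", e)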

---

[3]

" A key feature of SAFE is that every piece of data, down to the word level, is annotated with a tag representing policies that govern its use. While the tagging mechanism is very general, one partic- ularly interesting use of tags is for representing information-flow control (IFC) policies. For example, an individual record might be tagged “This information should only be seen by principals Alice or Bob ,” a function pointer might be tagged “This code is trusted to work with Carol ’s secrets,” or a string might be tagged “This came from the network and has not been sanitized yet.” Such tags repre- senting IFC policies can involve arbitrary sets of principals, and principals themselves can be dynamically allocated to represent an unbounded number of entities within and outside the system.

...

rather than using just a few "taint bits," SAFE associates a word-sized tag to every word of data in the machine—both memory and registers. In particular, SAFE tags can be pointers to arbitrary data structures in memory. The interpretation of these tags is left entirely to software

...

SAFE’s system software performs process scheduling, stream-based interprocess communication, storage allocation and garbage collection, and management of the low-level tagging hardware (the focus of this paper). The goal is to organize these services as a collection of mutually suspicious compartments following the principle of least privilege (a zero-kernel OS [43]), so that an attacker would need to compromise multiple compartments to gain complete control of the machine. It is programmed in a combination of assembly and Tempest, a new low-level programming language.

...

To begin with, SAFE is (dynamically) typed at the hardware level: each data word is indelibly marked as a number, an instruction, a pointer, etc. Next, the hardware is memory safe: every pointer consists of a triple of base, bounds, and offset (compactly encoded into 64 bits [17, 29]), and every pointer operation includes a hardware bounds check [29]. Finally, the hardware associates each word in the registers and memory, as well as the PC, with a large (59-bit) tag.

...

Breeze [21] (a mostly functional, security-oriented, dynamic language used for user-level programming on SAFE)

...

Our goal in this paper is to develop a clear, precise, and mathematically tractable model of one of the main innovations in the SAFE design: its scheme for efficiently supporting high-level data use policies using a combination of hardware and low-level system software. To make the model easy to work with, we simplify away many important facets of the real SAFE system. In particular, (i) we focus only on IFC and noninterference, although the tagging facilities of the SAFE machine are generic and can be applied to other policies (we return to this point in § 13); (ii) we ignore the Breeze and Tempest programming languages and concentrate on the hardware and runtime services; (iii) we use a stack instead of registers, and we distill the instruction set to just a handful of opcodes; (iv) we drop SAFE’s fine-grained privilege separation in favor of a more conventional user-mode / kernel-mode dichotomy; (v) we shrink the rule cache to a single entry (avoiding issues of replacement and eviction) and maintain it in kernel memory, accessed by ordinary loads and stores, rather than in specialized cache hardware; (vi) we omit a large number of IFC-related concepts (dynamic principals, downgrading, public labels, integrity, clearance, etc.); (vii) we handle exceptional conditions, including potential security violations, by simply halting the whole machine; and (viii) most importantly, we ignore concurrency, process scheduling, and interprocess communication, assuming instead that the whole machine has a single, deterministic thread of control

...

we distill the instruction set to just a handful of opcodes...Basic instruction set

"

---

gok 1 day ago

Sounds like the big idea from Singularity (run everything in the same hardware security ring, use static analysis for memory safety) is going mainstream, incrementally. Unfortunate that this is happening right as it becomes clear that such a design is fundamentally unsound on modern hardware.

zimbatm 1 day ago

+1 for pointing at the irony but it's not really true.

1. Hardware design flaws are not irremediable and will be fixed in time.

2. It's still very useful to run software that is trusted to not be malicious and/but not trusted to be void of memory bugs.

kllrnohj 18 hours ago

> Hardware design flaws are not irremediable and will be fixed in time.

There's no particular reason to believe this. Neither Intel nor AMD have committed to any form of side-effect-free speculative execution or similar. That'd require a rather large chunk of transistors and die space, and if nobody is footing the bill for it, it's not going to happen.

Preventing side-effects from leaking between processes (and ring levels) entirely at the hardware level is definitely going to happen. Within a process, though? That's not going to happen without multiple major players demanding it. Since nearly all the major CPU consumers are currently happy with process boundaries being the security enforcement zones, there's no particular reason to believe that in-process sandboxing will ever have hardware fixes to prevent spectre attacks.

geofft 1 day ago

As I understand it, there were basically two bug classes. One was that code that makes faulty fetches across a hardware privilege boundary can infer data from how long the fault takes. Another was that characteristics of branch prediction within the same process can reveal information about the branch path not taken, even if that branch is itself a bounds check or other security check.

The first attack isn't relevant to designs that don't use hardware isolation, but the second one absolutely is. If your virtual bytecode (wasm, JVM, Lua, whatever) is allowed access to a portion of memory, and inside the same hardware address space is other memory it shouldn't read (e.g., because there are two software-isolated processes in the same hardware address space), and a supervisor or JIT is guarding its memory accesses with branches, the second attack will let the software-isolated process execute cache timing attacks against the data on the wrong side of the branch.

(I believe the names are more-or-less that Meltdown is the first bug class and Spectre is the second, but the Spectre versions are rather different in characteristics - in particular I believe that Spectre v1 affects software-isolation-only systems and Spectre v2 less so. But the names confuse me.)

_wmd 1 day ago

For someone with only passing attention for this stuff, your comment explains it perfectly -- thanks!

Now I'm wondering how browsers presumably already cope with this for JS, or how CloudFlare workers cope with it, or .. etc.

kllrnohj 18 hours ago

The panic fix was to remove SharedArrayBuffer in the short term, since it was necessary to create a high precision timer. But the real fix is http://www.chromium.org/Home/chromium-security/site-isolation

Browsers just resort to process sandboxing entirely. They assume JS can escape its sandbox, but since it can only read contents that were produced from its origin anyway it doesn't really matter.

gok 1 day ago

Ultimately, code from each security origin is going to have to run in its own process space.

gok 1 day ago

The big news this year was that you can't run code of different trust levels in the same address space.

infinite8s 19 hours ago

Actually I think the big news is that you can't run them on the same CPU.


---

" Memory tagging, a new (and old) feature of hardware, can help with memory safety problems. People are working on making it happen on modern systems (paper). I don’t think it’s a replacement for fixing bugs in as systemic a way as possible (ideally, in the source language), but it has great potential to increase safety. https://en.wikipedia.org/wiki/Tagged_architecture https://llvm.org/devmtg/2018-10/slides/Serebryany-Stepanov-Tsyrklevich-Memory-Tagging-Slides-LLVM-2018.pdf https://arxiv.org/pdf/1802.09517.pdf " -- [4]

---

2011 CWE/SANS Top 25 Most Dangerous Software Errors

http://cwe.mitre.org/top25/

---

" ARMv8.3...Pointer Authentication Codes (PAC)...As explained in this https://www.qualcomm.com/documents/whitepaper-pointer-authentication-armv83 great white paper by QCom, this involves new instructions - AUTIA/PACIA - and their variants. PACIA generate an auth code in the pointer high bits, and AUTIA authenticates it before use. Since it's cumbersome to add these instructions, they have been built-in to new patterns, notably RET (now RETAA or RETAB) and LDR (now LDRAA/LDRAB). These enable the use of two keys (accessible in EL1, kernel, but not in EL0, user) of which Apple seems to be using one (A) to authenticate the pointers and put the PAC in the high bits. The PAC can then be verified, and if the validation fails, the RETA* will fail as well - meaning an end to ROP/JOP. The instructions are a drop in replacement to the original. ... Does that mean the end of exploitation? No. There's still UAFs and type confusions... " -- [5]

---

" Control Flow Integrity (CFI) describes a set of mitigation technologies that confine a program's control flow to a call graph of valid targets determined at compile-time.

... Integer Overflow Sanitization...LLVM's integer overflow sanitizers " -- [6]

---

LunaSea 10 hours ago

As a long time Node.js developer I took a look at the project but it's still a really half-baked implementation of the security flags. You almost always end up flipping all security switches off because your application needs every feature (network, filesystem, etc).

No package signing, no flags per module, no syscall whitelisting, etc.

kevinkassimo 8 hours ago

You turn all permissions on if you are actually using it as if it is Node for writing servers. But in the case when you are using a (potentially untrusted) code snippet to perform certain operations (like encode/decode base64 values), you might well want a sandbox. (For the case of Node-like usage, we actually also have a basic implementation of whitelisted directories/files for certain permissions, though not perfect atm)

We have also discussed possibilities of implementing an in-memory file system. (Maybe we will also bring sessionStorage to Deno)

Flags per module are slightly trickier, while whitelisting of certain features is trivial to implement.

Package signing is indeed a topic that Deno contributors need to discuss further and possibly find a solution for.

LunaSea 8 hours ago

Sure but logic doesn't exist in a vacuum so in most cases you are going to import / export your input data or your results to the filesystem or through the network.

I actually suggested a more granular, per-module approach to this during the Node Forward and IO.js debacle years ago: https://github.com/node-forward/discussions/issues/14

At the time it was deemed too difficult to implement in Node.js after the fact, which makes sense of course. But I'm disappointed that Deno didn't go for a bolder, more secure approach. The current system seems pretty pointless.


---

wolf550e 10 hours ago

AMD and others are fine. Just have security people on the team during microarchitecture design.

Spectre variant 1 is probably unavoidable so security inside a single virtual address space is kinda dead. Use separate processes and mmu.


---

here's what WASM is doing. it looks like 'nanoprocesses' aren't secure against Spectre et al though, and as the prev comment says, CPUs are no longer willing to provide security for sandboxing within a process

https://hacks.mozilla.org/2019/11/announcing-the-bytecode-alliance/ https://news.ycombinator.com/item?id=21515725

thread on spectre et al: https://news.ycombinator.com/item?id=21516982

---