proj-oot-ootCapabilitiesNotes1

read stdin, write stdout, write stderr

read/write other streams, files

read/write/execute any file under a location (note: read privilege is sufficient to import an Oot library, unless it uses extra-Oot extensions); network communications and other sensors/effectors are used via special locations

ulimit-style limits: number of spawned threads, amount of memory
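
(for comparison, here's roughly what ulimit-style self-limiting looks like on POSIX via Python's resource module; the particular limits and numbers are just illustrative, and RLIMIT_NPROC is Linux-specific and per-user)

    import resource

    # cap the address space of this process (and its children after fork) to 512 MiB
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, 512 * 1024 * 1024))

    # cap the number of processes/threads for this user (on Linux, RLIMIT_NPROC
    # counts threads too, since threads are tasks)
    resource.setrlimit(resource.RLIMIT_NPROC, (64, 64))

    # cap CPU time; exceeding the soft limit delivers SIGXCPU, exceeding the hard
    # limit kills the process
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))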

see http://lua-users.org/wiki/SandBoxes section "Table of Variables"

note: it's important that "globals" aren't truly global, and can be hidden from function calls upon demand (also must hide even 'hidden' scoped globals)

should also be able to pass in read-only variable references in order to give the program a way to create other 'pipes' into the untrusted code (maybe)
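
a rough Python analogue of the Lua-style sandboxing idea above: run untrusted code against a whitelisted globals dict and hand it a read-only view as the 'pipe'. (this is only a sketch; CPython's exec is not a real security boundary, since untrusted code can often escape via introspection, and the read-only view is shallow)

    import types

    def run_untrusted(source, readonly_data):
        # expose only an explicit whitelist of globals; '__builtins__' must be
        # overridden or exec() will supply the real builtins
        safe_globals = {
            '__builtins__': {'len': len, 'range': range, 'print': print},
            # a read-only 'pipe' into data the host wants to share
            # (shallow: nested objects are still mutable)
            'data': types.MappingProxyType(readonly_data),
        }
        exec(source, safe_globals)

    run_untrusted("print(len(data['items']))", {'items': [1, 2, 3]})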

CALLINGFUNCTION

---

--

shall we implement immutable, unaliasable, purity (see ootDataModes) as capabilities, the same as security capabilities?

each value has attached capabilities, which map onto std attributes (see ootAttributes) or custom attributes (e.g. for stuff like Rails permissions). there are operators to remove capabilities.

sometimes you want to allow debug code to do something that you don't want normal code to do, e.g. printing stuff to the console from a pure function. this should be possible if the point of capabilities is safety, but not if it is security. perhaps a concept like 'user groups' could help here: if you want to disallow something for safety, not security, leave the capability on for the DEBUG group; but if the purpose is security, then turn it off there too.

in essence these user groups allow the capabilities to be removed, but then to resurface when the value is passed to the right type of code segment. this seems to remove the need for a facility to pass capabilities around on their own, although maybe we should allow that too, in case i've forgotten something.

also need 'lockable' and 'unaliasable unless locked'

...in general, are capabilities just masking of object attributes (edges)?

in that case i guess there would be some sort of meta attribute that lists which attributes have been masked.. this list itself would be masked unless you are in the 'admin' group
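
a toy sketch of 'capabilities as masking of attributes', with the mask list itself visible only to an 'admin' group. this is hypothetical Oot semantics written in Python just for concreteness; all names are made up:

    class Masked:
        """Wraps a value and hides some attributes, except from listed groups."""
        def __init__(self, value, masked_attrs, visible_to=('admin',)):
            self._value = value
            self._masked = set(masked_attrs)
            self._visible_to = set(visible_to)

        def get(self, attr, group='user'):
            if attr in self._masked and group not in self._visible_to:
                raise PermissionError(f"attribute {attr!r} is masked")
            return getattr(self._value, attr)

        def masked_attrs(self, group='user'):
            # the list of masked attributes is itself masked from non-admin groups
            if group not in self._visible_to:
                raise PermissionError("mask list is masked")
            return set(self._masked)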

---

" Loopholes in Object-Oriented Programming Languages

Some object-based programming languages (e.g. JavaScript, Java, and C#) provide ways to access resources in other ways than according to the rules above including the following:

        direct assignment to the instance variables of an object in Java and C#
        direct reflexive inspection of the meta-data of an object in Java and C#
        the pervasive ability to import primitive modules, e.g. java.io.File that enable external effects.

Such use of undeniable authority effectively defeats the security benefits of the Object-capability model. Caja and Joe-E are variants of JavaScript and Java, respectively, that impose restrictions to eliminate these loopholes. "

-- http://en.wikipedia.org/wiki/Object-capability_model

---

capability access controls on variables to pass read-only references, etc

---

"AppArmor? ("Application Armor") is a Linux kernel security module that allows the system administrator to restrict programs's capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths."

---

part of making oot good for sandboxing is being able to enforce a timeout on any function calls or subprocesses, but in a way that gives the called function or subprocess a chance to respond before the hard cutoff (eg a 'soft timeout' where an exception is raised within the called function/subprocess, followed by a 'hard timeout' where the called function or subprocess is unilaterally terminated)
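
for the subprocess case this is the usual POSIX term-then-kill pattern; a minimal Python sketch (timeouts are illustrative):

    import subprocess

    def run_with_timeouts(cmd, soft=5.0, grace=2.0):
        proc = subprocess.Popen(cmd)
        try:
            return proc.wait(timeout=soft)
        except subprocess.TimeoutExpired:
            # soft timeout: ask the child to wind down (it may catch SIGTERM)
            proc.terminate()
            try:
                return proc.wait(timeout=grace)
            except subprocess.TimeoutExpired:
                # hard timeout: unilateral termination
                proc.kill()
                return proc.wait()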

---

list of capabilities in Linux:

http://man7.org/linux/man-pages/man7/capabilities.7.html

---

david-given 7 hours ago [-]

Roughly put: in a capability based system, if you have a valid handle for a service, then you can use that service. But the only way you can get a valid handle is to ask your parent process for one --- handles are unforgeable. So your parent gets to check that you're legitimate.

...but your parent, in turn, has limited permissions, because the only way it can get a handle is to ask its parent. And when you ask your parent for a filesystem handle, your parent doesn't have to give you the handle it has. Your parent might give you a limited filesystem handle which only allows writes to /tmp.

...and when you, in turn, start a subprocess --- say, an instance of sqlite for use as a database --- you have to give it handles for the things it wants to do on your behalf. But your filesystem handle only allows writes to /tmp. That means that your sqlite instance can only write to /tmp too.

There's more to it than that, because in real life you also want to forward capabilities to other running services as part of RPCs, but you tend to end up with a natural container system with multiple redundant layers of isolation, all running at minimum necessary privilege, and all hardware-mediated.

Another really interesting capability-based OS is Genode:

https://genode.org/about/screenshots

https://genode.org/documentation/general-overview/index

It'll allow you to run complete OS kernels as capability-restricted Genode processes, effectively allowing virtual machines as first-class citizens.

reply
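
(note: the attenuation idea above -- a limited filesystem handle that only allows access under /tmp -- can be sketched as a wrapper object. hypothetical API, Python, ignoring symlinks and other path subtleties:)

    import os

    class Filesystem:
        # the "full" handle: can open anything (sketch)
        def open(self, path, mode='r'):
            return open(path, mode)

    class PrefixLimitedFilesystem:
        """Attenuated handle: forwards to a parent handle, but only under a prefix."""
        def __init__(self, parent, prefix):
            self._parent = parent
            self._prefix = os.path.abspath(prefix)

        def open(self, path, mode='r'):
            full = os.path.abspath(path)
            if not full.startswith(self._prefix + os.sep):
                raise PermissionError(f"{path} is outside {self._prefix}")
            return self._parent.open(full, mode)

    # the parent holds the full handle, but hands the child only this:
    tmp_only = PrefixLimitedFilesystem(Filesystem(), '/tmp')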

Klathmon 7 hours ago [-]

That's actually really beautifully simple.

Thanks for this explanation, it really helped the idea "click"

reply

naasking 4 hours ago [-]

And for another beautiful convergence: nowadays most software development is done in languages that naturally express capability patterns, namely memory-safe languages. That is, if you have a reference to an object or value, you have the authority to invoke any of the methods on that object or call any functions that accept such a value. So object references are capabilities.

Most such languages only go too far by allowing "ambient authority", whereby any component in your program has the ability to turn a reference to a value that carries little authority, into one that carries significantly more authority. For instance, consider the file open API: you're essentially turning a string, which exposes no unsafe or security-critical methods, into an object that can literally destroy your computer. And literally any library or component you use can open files. It's madness!

To make a memory safe language capability secure, you simply remove all sources of ambient authority. This mainly consists of globals that permit transitive access to mutable state, like file system APIs, global static variables that can reference transitively mutable data, etc.

These insecure APIs are then replaced with equivalent object instances which you can pass around, eg. you can only receive the authority to open a file if you're given an instance to the directory object which contains that file. The entry point of your program would then change to accept various capabilities instead of just string parameters.

reply
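
(note: a sketch of what 'the entry point accepts capabilities instead of just strings' could look like; the Directory class and its methods are hypothetical, Python used just for concreteness:)

    import os, sys

    class Directory:
        """Capability for one directory subtree; authority to open files derives from it."""
        def __init__(self, path):
            self._path = os.path.abspath(path)

        def open(self, name, mode='r'):
            # only plain names inside this directory (sketch; ignores symlinks etc.)
            if os.sep in name or name == '..':
                raise PermissionError("only plain names inside this directory")
            return open(os.path.join(self._path, name), mode)

        def subdir(self, name):
            return Directory(os.path.join(self._path, name))

    def main(argv, workdir):
        # the program never sees a global file-open API, only this capability
        with workdir.open(argv[1]) as f:
            print(f.read())

    if __name__ == '__main__':
        # the boundary of the program decides how much authority to grant
        main(sys.argv, Directory('.'))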

catern 3 hours ago [-]

And as an even deeper correspondence: Once your language is memory-safe and capability-secure, you don't even need a kernel, let alone a microkernel. Rather than use hardware address spaces to separate and modularize components (the primary idea behind microkernels), you just pass capabilities around to your components. One component can't crash the whole system, because all it can access is the APIs you pass in to it. If you want to add isolated processes to your OS, just implement actors in your language: They're equivalent.

Of course, you can always have a bug in your APIs that allows an attack or bug to propagate. But that was always the case even without capability-safety. Capabilities can't give you everything for free. :)

reply

nickpsecurity 2 hours ago [-]

JX Operating System does something like this within a JVM. It runs on a microkernel but I think the mediation is app/language level.

reply

monocasa 7 hours ago [-]

So, going back to Win32, it's as if OpenFile also took a HANDLE that represented your abilities (or capabilities if you will) within the security model, with the explicit ability to forward these handles (or new handles that represent a subset of the original's capabilities) to other processes if you choose.

reply

jvert 14 minutes ago [-]

In Win32, this handle is called the AccessToken https://msdn.microsoft.com/en-us/library/windows/desktop/aa3... and the calling thread's current access token is used by OpenFile to grant or deny the requested access.

reply

---

adrianratnapala 7 hours ago [-]

I'm no Windows expert, so I didn't know that windows handles were used as security primitives.

I thought they were just a bit like file-descriptors or X11 window ids, or indeed pointers. Such handles do have a kind of role in authorization: once a process has convinced the system to give it some resource, then the (handle, processid) pair is all the system needs to check access.

However you typically gain the handle through something like `open()`, i.e. an ACLed request for a named resource. But with true capabilities you just inherit authorisation from some other capability -- possibly one granted to you by a different process.

That said, the difference from existing systems might be small. Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.

reply

naasking 4 hours ago [-]

> Namespaces are really useful, and are probably here to stay. But as long as things can be accessed by names, the access will need to be controlled by something like an ACL.

Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

A file open dialog from a program is simply the program's request for you to map a file into its namespace.

reply

adrianratnapala 25 minutes ago [-]

Hmm, that sounds a bit awkward -- every capability on the system would, at least conceptually need a separate copy of the pruned name-tree. There might be good ways of implementing that, but it's not obvious.

On the other hand, I can see how capabilities would mix with ACLed namespaces to improve security. Essentially it would define the semantics of things like cgroups and chroot jails.

In fact sometimes I think that mainstream virtualisation is slowly reinventing -- in an incremental way -- what various alternative OSes advocated in a radical way. E.g. Linux cgroups and "one process per container" sounds quite a lot like Plan9's per-process view of the system.

reply

catern 4 hours ago [-]

>Not necessarily. If your system supports first-class namespaces, then you can just build a namespace consisting of only the objects to which a program should have access. No need for any further access control.

Unfortunately this is usually quite heavyweight. In Unix-like systems, and in Plan 9, the canonical way to do this is to implement a filesystem. "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

One would much prefer to just put together a hierarchy of objects at the language level and automatically expose it... or something like that anyway.

reply

naasking 3 hours ago [-]

> "Filesystem" is the universal IPC layer. But implementing a filesystem usually takes quite a lot of effort.

It needn't be. A file system is just a set of nested hashtables. The complexity of traditional file systems comes from the durable representation, but that isn't necessarily needed for first-class namespaces. You can just serialize and deserialize a namespace as needed on top of an ordinary file system.

> Do you know of any systems where this can be done easily enough for it to be used ubiquitously?

Plan 9 obviously. The Plash capability secure shell [1]. There are probably a couple of others, but not too many overall.

[1] http://www.cs.jhu.edu/~seaborn/plash/plash-orig.html

reply
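
(note: the 'nested hashtables' view of a namespace, with pruning and trivial (de)serialization, might look like this sketch; names are made up:)

    import json

    # a namespace is just nested dicts; leaves are whatever objects you like
    ns = {
        'etc': {'motd': 'hello'},
        'srv': {'app': {'config': {'port': 8080}}},
    }

    def lookup(namespace, path):
        node = namespace
        for part in path.strip('/').split('/'):
            node = node[part]
        return node

    # build a pruned namespace containing only what one program should see
    pruned = {'app': lookup(ns, '/srv/app')}

    # a first-class namespace can be serialized/deserialized on top of an ordinary fs
    blob = json.dumps(pruned)
    restored = json.loads(blob)

    assert lookup(restored, 'app/config/port') == 8080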

---

"basic exploit mitigations - including stack cookies, safe unlinking and access permission protection (by means of an MPU)." [1]

---

ridiculous_fish 12 hours ago [-]

In Midori, objects are capabilities. This is possible because type safety is enforced and an object's type cannot be forged.

It's easy to imagine a capabilities system built around closures. For example, a capability to send on a socket could be an unforgeable closure that performs that sending. Are there any systems that work this way?

reply
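
(note: the closure idea is easy to sketch -- the callee's only authority is the closure itself. Python, illustrative; assumes the language doesn't let the callee forge or extract references:)

    import socket

    def make_send_capability(host, port):
        sock = socket.create_connection((host, port))
        def send(data):
            # the closure can send on this one socket and nothing else;
            # the socket object itself is never exposed to the caller
            sock.sendall(data)
        return send

    # untrusted code receives only 'send'; it cannot reach the socket, reconnect
    # elsewhere, or read from it
    # send = make_send_capability('example.com', 80)
    # send(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')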

andrewflnr 11 hours ago [-]

More details on Midori: http://joeduffyblog.com/2015/11/03/blogging-about-midori/ This is the first of several blog posts by one of the authors. I haven't read all of them, but liked what I read so far. Does anyone have other resources with more detail?

reply

---

dm319 2 days ago [-]

Can anyone tell me how secure web browsers are on linux generally. I was looking at the pwn2own results and was quite impressed with the edge compromise that got out of the host OS it was running in. However, I didn't see any web browsers on linux being compromised - is that because it is harder, or no-one bothered?

reply

lima 2 days ago [-]

Comparable to Windows. Windows is a high-profile target for web browsing so you're seeing more exploits.

Web browser escapes usually target the operating system kernel in order to break out of the sandbox. Chrome can reduce its attack surface on Linux using seccomp, and while there's a win32 syscall filter on Windows 10, I'm not sure if Chrome uses it. Windows has stronger kernel self protection features than Linux, but also more attack surface.

Chrome uses Linux namespaces (like LXC/Docker/...) for isolation in addition to seccomp-bpf.

Chrome is by far the most secure one right now, but as evidenced by this blog article, Firefox is catching up.

reply

ryuuchin 1 day ago [-]

Chrome disables all win32k syscalls in its renderer/extension/plugin processes (at the very least flash and pdf). This has been the default for a while now.

I believe the specific win32k syscall filter you're referring to (where you can white/black list specific syscalls) is still only accessible to Microsoft applications (Edge uses it).

reply

johncolanduoni 2 days ago [-]

The Chromium sandbox supports the Win32k mitigation, though it’s possible it’s not enabled at runtime.

reply

ryuuchin 1 day ago [-]

Back when grsecurity was still public I would've said that using it with Chrome provided the best security you could get on Linux. Between the defensive mitigations it added through PaX and all the other little things (like making chroots actually behave like jails) it greatly increased security. This is without even using any MAC system.

I'm not entirely sure how things stack up now. In the past I never had much faith in the vanilla linux kernel.

reply

---

https://www.blackhat.com/docs/eu-17/materials/eu-17-Arnaboldi-Exposing-Hidden-Exploitable-Behaviors-In-Programming-Languages-Using-Differential-Fuzzing-wp.pdf

finds and describes vulnerabilities in several interpreted languages

---


" It's also debatable whether mlock() is a proper way to protect sensitive information. According to POSIX, mlock()ing a page guarantees that it

---

security

[2]

Building Hardware Systems for Information Flow Tracking (2010)

interesting security idea:

(quoted:)

DIFT: Dynamic Information Flow Tracking

DIFT taints data from untrusted sources: Extra tag bit per word marks if untrusted

Propagate taint during program execution

Check for unsafe uses of tainted data

eg Tainted code execution

eg Tainted pointer dereference (code & data)

eg Tainted SQL command

Operating System: Tag Aware; Save/restore tags, Cross-process info flow

Policy granularity: operation type:

Propagate Policy Example: load

Enables: 1. Propagate only from source register 2. Propagate only from source address 3. Propagate only from both sources (OR mode, AND mode, or XOR mode)

Check Policy Example: load

Check Enables: 1. Check source register (if Tag(r1)==1 then security_trap) 2. Check source address (if Tag(M[r1+offset])==1 then security_trap)

Both enables may be set simultaneously

Support for checks across multiple tag bits

Performance: <1% slowdown

4 bits/word in registers, caches, memory

12.5% storage overhead
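
(a toy model of the taint propagation / check policies above, not the paper's actual hardware design; Python, OR-mode propagation only:)

    class Tainted:
        """A word with one taint bit; operations propagate taint (OR mode)."""
        def __init__(self, value, taint=False):
            self.value, self.taint = value, taint

        def __add__(self, other):
            return Tainted(self.value + other.value, self.taint or other.taint)

    def load(memory, addr):
        # check policy: dereferencing a tainted pointer is an unsafe use
        if addr.taint:
            raise RuntimeError("security trap: tainted pointer dereference")
        word = memory[addr.value]
        # propagate policy: the result carries the taint of the loaded word
        return Tainted(word.value, word.taint)

    mem = {0: Tainted(42), 1: Tainted(7, taint=True)}
    x = load(mem, Tainted(0))          # ok
    y = x + Tainted(1, taint=True)     # taint propagates through the add
    # load(mem, y)                     # would raise: tainted pointer dereference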

---

[3]

" A key feature of SAFE is that every piece of data, down to the word level, is annotated with a tag representing policies that govern its use. While the tagging mechanism is very general, one partic- ularly interesting use of tags is for representing information-flow control (IFC) policies. For example, an individual record might be tagged “This information should only be seen by principals Alice or Bob ,” a function pointer might be tagged “This code is trusted to work with Carol ’s secrets,” or a string might be tagged “This came from the network and has not been sanitized yet.” Such tags repre- senting IFC policies can involve arbitrary sets of principals, and principals themselves can be dynamically allocated to represent an unbounded number of entities within and outside the system.

...

rather than using just a few “taint bits,” SAFE associates a word-sized tag to every word of data in the machine—both memory and registers. In particular, SAFE tags can be pointers to arbitrary data structures in memory. The interpretation of these tags is left entirely to software

...

SAFE’s system software performs process scheduling, stream-based interprocess communication, storage allocation and garbage collection, and management of the low-level tagging hardware (the focus of this paper). The goal is to organize these services as a collection of mutually suspicious compartments following the principle of least privilege (a zero-kernel OS [43]), so that an attacker would need to compromise multiple compartments to gain complete control of the machine. It is programmed in a combination of assembly and Tempest, a new low-level programming language.

...

To begin with, SAFE is (dynamically) typed at the hardware level: each data word is indelibly marked as a number, an instruction, a pointer, etc. Next, the hardware is memory safe: every pointer consists of a triple of base, bounds, and offset (compactly encoded into 64 bits [17, 29]), and every pointer operation includes a hardware bounds check [29]. Finally, the hardware associates each word in the registers and memory, as well as the PC, with a large (59-bit) tag.

...

Breeze [21] (a mostly functional, security-oriented, dynamic language used for user-level programming on SAFE)

...

Our goal in this paper is to develop a clear, precise, and mathematically tractable model of one of the main innovations in the SAFE design: its scheme for efficiently supporting high-level data use policies using a combination of hardware and low-level system software. To make the model easy to work with, we simplify away many important facets of the real SAFE system. In particular, (i) we focus only on IFC and noninterference, although the tagging facilities of the SAFE machine are generic and can be applied to other policies (we return to this point in § 13); (ii) we ignore the Breeze and Tempest programming languages and concentrate on the hardware and runtime services; (iii) we use a stack instead of registers, and we distill the instruction set to just a handful of opcodes; (iv) we drop SAFE’s fine-grained privilege separation in favor of a more conventional user-mode / kernel-mode dichotomy; (v) we shrink the rule cache to a single entry (avoiding issues of replacement and eviction) and maintain it in kernel memory, accessed by ordinary loads and stores, rather than in specialized cache hardware; (vi) we omit a large number of IFC-related concepts (dynamic principals, downgrading, public labels, integrity, clearance, etc.); (vii) we handle exceptional conditions, including potential security violations, by simply halting the whole machine; and (viii) most importantly, we ignore concurrency, process scheduling, and interprocess communication, assuming instead that the whole machine has a single, deterministic thread of control

...

we distill the instruction set to just a handful of opcodes...Basic instruction set

"

---

gok 1 day ago [-]

Sounds like the big idea from Singularity (run everything in the same hardware security ring, use static analysis for memory safety) is going mainstream, incrementally. Unfortunate that this is happening right as it becomes clear that such a design is fundamentally unsound on modern hardware.

reply

zimbatm 1 day ago [-]

+1 for pointing at the irony but it's not really true.

1. Hardware design flaws are not irremediable and will be fixed in time.

2. It's still very useful to run software that is trusted to not be malicious and/but not trusted to be void of memory bugs.

reply

kllrnohj 18 hours ago [-]

> Hardware design flaws are not irremediable and will be fixed in time.

There's no particular reason to believe this. Neither Intel nor AMD have committed to any form of side-effect-free speculative execution or similar. That'd required a rather large chunk of transistors and die space, and if nobody is footing the bill for it it's not going to happen.

Preventing side-effects from leaking between processes (and ring levels) entirely at the hardware level is definitely going to happen. Within a process, though? That's not going to happen without multiple major players demanding it. Since nearly all the major CPU consumers are currently happy with process boundaries being the security enforcement zones, there's no particular reason to believe that in-process sandboxing will ever have hardware fixes to prevent spectre attacks.

reply

 geofft 1 day ago [-]

As I understand it, there were basically two bug classes. One was that code that makes faulty fetches across a hardware privilege boundary can infer data from how long the fault takes. Another was that characteristics of branch prediction within the same process can reveal information about the branch path not taken, even if that branch is itself a bounds check or other security check.

The first attack isn't relevant to designs that don't use hardware isolation, but the second one absolutely is. If your virtual bytecode (wasm, JVM, Lua, whatever) is allowed access to a portion of memory, and inside the same hardware address space is other memory it shouldn't read (e.g., because there are two software-isolated processes in the same hardware address space), and a supervisor or JIT is guarding its memory accesses with branches, the second attack will let the software-isolated process execute cache timing attacks against the data on the wrong side of the branch.

(I believe the names are more-or-less that Meltdown is the first bug class and Spectre is the second, but the Spectre versions are rather different in characteristics - in particular I believe that Spectre v1 affects software-isolation-only systems and Spectre v2 less so. But the names confuse me.)

reply

_wmd 1 day ago [-]

For someone with only passing attention for this stuff, your comment explains it perfectly -- thanks!

Now I'm wondering how browsers presumably already cope with this for JS, or how CloudFlare workers cope with it, or .. etc.

reply

kllrnohj 18 hours ago [-]

The panic fix was to short term remove SharedArrayBuffer which was necessary to create a high precision timer. But the real fix is http://www.chromium.org/Home/chromium-security/site-isolation

Browsers just resort to process sandboxing entirely. They assume JS can escape its sandbox, but since it can only read contents that were produced from its origin anyway it doesn't really matter.

reply

gok 1 day ago [-]

Ultimately, code from each security origin is going to have to run in its own process space.

reply

gok 1 day ago [-]

The big news this year was that you can't run code of different trust levels in the same address space.

reply

infinite8s 19 hours ago [-]

Actually I think the big news is that you can't run them on the same CPU.

reply

---

" Memory tagging, a new (and old) feature of hardware, can help with memory safety problems. People are working on making it happen on modern systems (paper). I don’t think it’s a replacement for fixing bugs in as systemic a way as possible (ideally, in the source language), but it has great potential to increase safety. https://en.wikipedia.org/wiki/Tagged_architecture https://llvm.org/devmtg/2018-10/slides/Serebryany-Stepanov-Tsyrklevich-Memory-Tagging-Slides-LLVM-2018.pdf https://arxiv.org/pdf/1802.09517.pdf " -- [4]

---

2011 CWE/SANS Top 25 Most Dangerous Software Errors

http://cwe.mitre.org/top25/

---

" ARMv8.3...Pointer Authentication Codes (PAC)...As explained in this https://www.qualcomm.com/documents/whitepaper-pointer-authentication-armv83 great white paper by QCom, this involves new instructions - AUTIA/PACIA - and their variants. PACIA generate an auth code in the pointer high bits, and AUTIA authenticates it before use. Since it's cumbersome to add these instructions, they have been built-in to new patterns, notably RET (now RETAA or RETAB) and LDR (now LDRAA/LDRAB). These enable the use of two keys (accessible in EL1, kernel, but not in EL0, user) of which Apple seems to be using one (A) to authenticate the pointers and put the PAC in the high bits. The PAC can then be verified, and if the validation fails, the RETA* will fail as well - meaning an end to ROP/JOP. The instructions are a drop in replacement to the original. ... Does that mean the end of exploitation? No. There's still UAFs and type confusions... " -- [5]

---

" Control Flow Integrity (CFI) describes a set of mitigation technologies that confine a program's control flow to a call graph of valid targets determined at compile-time.

... Integer Overflow Sanitization...LLVM's integer overflow sanitizers " -- [6]

---

LunaSea 10 hours ago [-]

As a long time Node.js developer I took a look at the project but it's still a really half-baked implementation of the security flags. You almost always end up flipping all security switches off because your application needs every feature (network, filesystem, etc).

No package signing, no flags per module, no syscall whitelisting, etc.

reply

kevinkassimo 8 hours ago [-]

You turn all permissions on if you are actually using it as if it is Node for writing servers. But in the case when you are using a (potentially untrusted) code snippet to perform certain operations (like encode/decode base64 values), you do might want a sandbox. (For the case of Node-like usage, we actually also have a basic implementation of whitelisted directories/files for certain permissions, though not perfect atm)

We have also discussed about possibilities of implementing an in-memory file system. (Maybe we will also bring sessionStorage to Deno)

Flags per module is slightly trickier, while whitelisting of certain feature is trivial to implement.

Package signing is indeed a topic that Deno contributors need to discuss further and possibly finding a solution.

reply

LunaSea 8 hours ago [-]

Sure but logic doesn't exist in a vacuum so in most cases you are going to import / export your input data or your results to the filesystem or through the network.

I actually suggested a more granular, per-module approach to this during the Node Forward and IO.js debacle years ago: https://github.com/node-forward/discussions/issues/14

At the time it was deemed too difficult to implement in Node.js after the fact, which makes sense of course. But I'm disappointed that Deno didn't go for a bolder, more secure approach. The current system seems pretty pointless.

reply

---

wolf550e 10 hours ago [-]

AMD and others are fine. Just have security people on the team during microarchitecture design.

Spectre variant 1 is probably unavoidable so security inside a single virtual address space is kinda dead. Use separate processes and mmu.

reply

---

here's what WASM is doing. looks like 'nanoprocesses' aren't secure against Spectre et al tho and as the prev comment says, CPUs are no longer willing to provide security for sandboxing within a process

https://hacks.mozilla.org/2019/11/announcing-the-bytecode-alliance/ https://news.ycombinator.com/item?id=21515725

thread on spectre et al: https://news.ycombinator.com/item?id=21516982

---

instead of using fat pointers for capabilities, could use a 'primary key' that is unique to each object (and never reused, unlike memory addresses). CHERI has had good luck with that approach (at higher levels of the software stack), apparently: https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201904-asplos-cheriabi.pdf
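
(one way to picture the 'primary key' alternative: capabilities are opaque ids into a table, the ids are never reused, and revocation is just deleting the row. sketch only, not CHERI's actual mechanism:)

    import itertools

    class HandleTable:
        """Capabilities are opaque ids into this table; ids are never reused."""
        def __init__(self):
            self._next = itertools.count(1)
            self._objects = {}

        def grant(self, obj):
            handle = next(self._next)
            self._objects[handle] = obj
            return handle

        def deref(self, handle):
            try:
                return self._objects[handle]
            except KeyError:
                raise PermissionError("invalid or revoked handle") from None

        def revoke(self, handle):
            self._objects.pop(handle, None)  # the id is never handed out again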

---

when unserializing arbitrary data types, be sure not to unserialize capabilities (or any other way to elevate access or the rights assigned to a type)
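
(a sketch of the kind of check meant here: the decoder reconstructs plain data only and refuses anything that claims to be a capability type. the type names and the '__type__' convention are made up:)

    import json

    CAPABILITY_TYPES = {'FileHandle', 'SocketHandle', 'AdminToken'}  # hypothetical

    def safe_object_hook(obj):
        # decoded objects are plain dicts; refuse anything claiming to be a capability
        if obj.get('__type__') in CAPABILITY_TYPES:
            raise ValueError(f"refusing to deserialize capability type {obj['__type__']}")
        return obj

    def safe_loads(blob):
        return json.loads(blob, object_hook=safe_object_hook)

    safe_loads('{"name": "ok"}')                         # fine
    # safe_loads('{"__type__": "FileHandle", "fd": 3}')  # raises ValueError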

---

https://glyph.twistedmatrix.com/2020/08/never-run-python-in-your-downloads-folder.html

summary: in Python, if you have accidentally downloaded malware to your downloads folder with a bad name, like 'pip', and then you run something that runs or imports pip from your downloads folder (and/or while the current working directory is your downloads folder?), then the malware can be executed

---

"...“confused deputy”, which is when a system was tricked into performing an action that the initiator of the action was not supposed to have the permission to do, but a middle-layer (deputy) did." -- https://weinholt.se/articles/non-posix-filesystems/

---

" A vir-tualizable architecture would allow a virtual machine todirectly execute on the real hardware while guaranteeingthat the virtual machine monitor (VMM) retains controlof the CPU. This is done by running the operating sys-tem in the virtual machine, the guest operating system,in non-privileged mode while the VMM runs in priv-ileged mode. ARM is not virtualizable because thereare a number of sensitive instructions, used by operat-ing systems, which do not generate a trap when executedin non-privileged mode. ... Popek and Goldberg [12] define sensitive instructionsas the group of instructions where the effect of their ex-ecution depends on the mode of the processor or the lo-cation of the instruction in physical memory. A sensi-tive instruction is also privileged if it always generatesa trap, when executed in user mode. The VMM canonly guarantee correct guest execution without the useof dynamic translation if all sensitive instructions arealso privileged. In other words, an architecture is vir-tualizable if and only if the set of sensitive instructionsis a subset of the set of privileged instructions. If thatis the case, the VMM can be implemented using a clas-sic trap-and-emulate solution. Unfortunately, ARM isnot virtualizable as the architecture defines both sensi-tive privileged instructions and sensitive non-privilegedinstructions " -- http://systems.cs.columbia.edu/files/wpid-ols2010-kvmarm.pdf

For example, RFE on ARMv7:

"ARMv7 is notclassically virtualiz-able[32] because, among other reasons, the return-from-exception instruction,RFE, isnot defined to trap when executed in user mode [79, 8" [7]

"Usage You can use RFE to return from an exception if you previously saved the return state using the SRS instruction. Rn is usually the SP where the return state information was saved. Operation Loads the PC and the CPSR from the address contained in Rn, and the following address. Optionally updates Rn. " -- [8]

"RFE is control-sensitive because it loads values into theprogram counter (R15) and theCPSRfrom memory" [9]

"The currently active processor state is represented inthe current program status register (CPSR). It stores theprocessor modeM, data memory endianness and interruptmask bits among other fields" [10]

"Thesta-tus register access instructionsCPS,MRS,MSR,RFE,SRSread and write the CPSR and the SPSR. Writesto privileged bits are ignored when the CPU is in usermode and access to the SPSR is unpredictable in usermode" [11]

for example, openRISC:

"OpenRISC? is not classically virtualizable because the return-from-exception instruc-tion,L.RFE, is defined to function normally in user mode, rather than trapping" [12]

for example, x86:

"The ISA is not classically virtualizable, since some privileged instructions silently failin user mode rather than trapping. VMware’s engineers famously worked around thisdeficiency with intricate dynamic binary translation software" [13] ---

goldsteinq 1 day ago [–]

I was curious about how `Deno.permissions` prevents things like "requesting the permissions, then clearing the screen and asking to press 'g'". Apparently, `Deno.permissions.request()` just stops program execution waiting for user response, so it's impossible to do anything after this call. It also seems to reset text color, so it's impossible to make text invisible. iTerm custom escape sequences allows program to change color profile on the fly. Setting bg and fg to the same value hides Deno messages and allows someone to conceal which permissions are being granted.

reply

goldsteinq 1 day ago [–]

Issue: https://github.com/denoland/deno/issues/9666

reply

caspervonb 1 day ago [–]

Thank you! we'll look into it!

reply

---

mike_hearn 3 months ago

parent [–]on: Dependency Confusion: How I Hacked Into Apple, Mic...

They are not in tension. The Java security architecture is a mix of capability and module-level security.

It's probably worth posting a quick refresher. The system is old but people don't use it much these days, and the documentation isn't that good. At one point I wrote a small JavaFX PDF viewer that sandboxed the PDF rendering code, to learn the system. I lost the source code apparently, but the hard part wasn't coding it (only a small bit of code was required), it was learning how to configure and use it. I tested the sandbox by opening a PDF that contained an exploit for an old, patched security bug and by using an old, vulnerable version of the PDFbox library. The sandbox successfully stopped the exploit.

Fortunately the Java team still maintain the sandbox and via new technology like the module system and GraalVM, are reinforcing it. In fact, GraalVM introduces a new sandboxing technology as well that's simpler to use than the SecurityManager, however, it's also probably less appropriate for the case of blocking supply chain attacks.

Java's internal security is based on two key ideas:

1. Code that can protect its private state. When the SecurityManager is enabled and a module is sandboxed, it isn't allowed to use reflection to override field or method visibility.

2. Stack walks.

Let's tackle these backwards. Imagine it's time to do something privileged, like open a file. The module containing the file API will be highly privileged as it must be able to access native code. It will have a method called read() or something like that. Inside that method the code will create a new permission object that represents the permission to open files under a certain path. Then it will use AccessController, like this:

   FilePermission perm = new FilePermission("/temp/testFile", "read");
   AccessController.checkPermission(perm);

The checkPermission call will then do a stack walk to identify the defining module of every method on the stack. Each module has its own set of granted permissions, the access controller will intersect them to determine what permissions the calling code should have. Note: intersection. That means if any unprivileged code is on the stack at all the access check fails and checkPermission will throw an exception. For example, if an unprivileged module registers a callback from a highly privileged module, that doesn't work: the low privileged module will be on the stack and so the privilege is dropped.

Access control contexts are themselves reified as objects, so instead of doing a permission check immediately you can 'snapshot' the permissions available at a certain point and use it later from somewhere else. And, starting a thread copies the permissions available at that point into the new thread context. So you cannot, in the simple case, elevate privilege.

Stack walking and permission intersection is slow. It was optimised a lot in Java 9 and 10 so the performance impact of enabling sandboxing is much less than it once was, but it's clearly not zero overhead. Therefore the JVM provides other techniques. One is the notion of a capability, known from many other systems. Instead of doing a permission check on every single file read (slow), do it once and then create a File object. The File object allows reading of the underlying native file via its private fields. Whoever has a pointer to the File object can thus read from it. Because pointers cannot be forged in Java, this is secure as long as you don't accidentally lose your pointer or pass it to code that shouldn't have it.

Sometimes you need to wrap a privileged operation to "dilute" it somehow. For example, imagine you have a module that allows arbitrary socket access. You also have an HTTP client. You would like the HTTP client to have network access, but for it to be usable by other modules that should only be able to contact specific hosts. Given what I've described so far that wouldn't work: the highly privileged code that can do native calls would do a stack walk, discover the unprivileged module on the stack and throw an exception. But there's a fix: AccessController.doPrivileged. This is kind of like sudo. It takes a lambda and truncates the stack that's examined for access checks at the point of use. Therefore it allows a module to use its own assigned permissions regardless of who is calling it. Of course, that is powerful and must be used carefully. In this case the HTTP client would itself check a different, HTTP specific permission. If that permission passed, then it would assert its own power to make arbitrary network connections and go ahead and use the lower level API.

There are a few more pieces but they aren't core. One is the class called SecurityManager. This is the most famous part of the API but in fact, it's no longer really needed. SecurityManager simply delegates to AccessController now. Its API is slightly more convenient for the set of built in permissions. For the purposes of understanding the design you can effectively ignore it. The SecurityManager needs to be activated using a system property as otherwise, for performance reasons, permission checks are skipped entirely at the check sites. Beyond that it can be left alone, or alternatively, customised to implement some unusual security policy. Another piece is the policy language. Permissions are not intrinsic properties of a module in the JVM but rather assigned via an external file. The final piece is the module system. This isn't relevant to the sandbox directly, but it makes it easier to write secure code by adding another layer of protection around code to stop it being accessed by stuff that shouldn't have access to it. After a careful review of the old JVM sandbox escapes from the applet days, the Java team concluded that the module system would have blocked around half of them.

So as you can see the design is very flexible. There's really nothing else like it out there, except maybe .NET CAS but I believe they got rid of that.

Unfortunately there are some pieces missing, if we want to re-awaken this kraken.

The first is that modules have no way to advertise what permissions they need to operate. That has to be specified in an external, per-JVM file, and there are no conventions for exposing this, therefore build tools can't show you permissions or integrate the granting of them.

The second is that some code isn't sandbox compatible. The most common reason for this is that it wants to reflectively break into JVM internals, for example to get better performance. Of course that's not allowed inside a sandbox.

A third is that some code isn't secure when sandboxed because it will, for example, create a File object for your entire home directory and then put it into a global public static field i.e. it doesn't treat its capabilities with care. The module system can help with this because it can make global variables less global, but it's still not ideal.

The final piece is some sort of community consensus that sandboxing matters. Bug reports about sandboxing will today mostly be ignored or closed, because developers don't understand how to use it and don't see the benefit. It's fixable with some better tutorials, better APIs, better tooling etc. But first people have to decide that supply chain attacks are a new thing that matters and can't be ignored any longer.

ryukafalz 3 months ago [–]

> Sometimes you need to wrap a privileged operation to "dilute" it somehow. For example, imagine you have a module that allows arbitrary socket access. You also have an HTTP client. You would like the HTTP client to have network access, but for it to be usable by other modules that should only be able to contact specific hosts. Given what I've described so far that wouldn't work: the highly privileged code that can do native calls would do a stack walk, discover the unprivileged module on the stack and throw an exception.

Not sure if this is something Java enables, but in principle you could do this in a capability-style way as well. Let's say you have an HTTP client module that you want to allow another module to use, but only to make requests to a specific host. You could write a wrapper with a subset of the HTTP client's functionality, only including (for example) a send() method that would send an HTTP request to the specified host. You'd then pass that to the module that you want to be able to make HTTP connections (rather than the raw HTTP client), and provided your wrapper object doesn't expose functionality from the underlying module that would let a client specify arbitrary hosts, you're in a pretty good spot.
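
(the host-pinned wrapper described above, sketched concretely; both classes are hypothetical, Python for illustration:)

    class HttpClient:
        """Powerful capability: can contact any host (implementation elided)."""
        def request(self, host, path):
            ...

    class SingleHostClient:
        """Attenuated capability: same idea, but pinned to one host."""
        def __init__(self, client, host):
            self._client = client
            self._host = host

        def send(self, path):
            return self._client.request(self._host, path)

    # the module that should only talk to api.example.com gets only this:
    # api = SingleHostClient(HttpClient(), 'api.example.com')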

mike_hearn 3 months ago [–]

That's the same thing I was just describing but recursed another level. It doesn't help by itself. Something needs to have permission to use the higher level of privilege - raw network access in my example, 'raw' http client access in yours. And something else needs to check that permission. Yes, you could wrap that privilege in a capability afterwards, but the reason Java has both capabilities and stack walking is because something needs to authenticate code and then authorise the production of a capability to start with.

ryukafalz 3 months ago [–]

Maybe I'm missing something about the use case, but I'm not sure I quite follow.

Sure, something needs to have permission to use the higher level of privilege. On your typical POSIX OS, your program is probably born with the ability to create arbitrary TCP/UDP sockets by default; on a capability OS, maybe you've explicitly provided it with access to your network stack. Regardless, at the entry point to your program you presumably have modules providing arbitrary network access in scope somehow.

If I'm understanding correctly, the case you described is that you have an HTTP client module that you'd like to have direct access to the network, but you'd like to restrict the consumers of the HTTP client to only querying certain hosts. From the start of your program, you'd instantiate an HTTP client (passing it a capability to use the network interface) then instantiate one of those HTTP client proxy objects that only allows communication with one host (passing it a capability to use the HTTP client). From there, you pass the capability to that proxy object to the unprivileged consumer of the module.

This seems to work without any kind of stack walking authentication logic, just normal variable scope, provided the language is capability-based. Am I missing something?

nmadden 3 months ago [–]

Exactly. What usually happens in capability systems is that the main() method gets all the capabilities (or whatever capabilities the user allowed it) and then does dependency injection to distribute those to other components. No need for complex stack-based authentication or policy rule evaluation.

Indeed, if you look at the history of Java sandbox escapes they are largely confused deputy attacks: some privileged code source can be tricked into doing something it shouldn’t do.

mike_hearn 3 months ago [–]

You can build a sandboxing language without any sort of stack walking. SEL4+C does this. It doesn't have especially good usability at scale, and it's not easy to modularise.

You're imagining a system where there's no specific authentication system for code. Instead in order to use a library, you need to explicitly and manually obtain all the capabilities it needs then pass them in, and in main() you get a kind of god object that can do everything that then needs to be progressively wrapped. If a library needs access to a remote service, you have to open the socket yourself and pass that in, and the library then needs to plumb it through the whole stack manually to the point where it's needed. If the library develops a need for a new permission then the API must change and again, the whole thing has to be manually plumbed through. This is unworkable when you don't control all the code in question and thus can't change your APIs, and as sandboxing is often used for plugins, well, that's a common problem.

There's no obvious way to modularise or abstract away that code. It can't come from the library itself because that's what you're trying to sandbox. So you have to wire up the library to the capabilities yourself. In some cases this would be extremely painful. What if the library in question is actually a networking library like Netty? There could be dozens or hundreds of entry points that eventually want to open a network connection of some sort.

What does this god object look like? It would need to hold basically the entire operating system interface via a single access point. That's not ideal. In particular, loading native code would need to also be a capability, which means any library that optimised by introducing a C version of something would need to change its entire API, potentially in many places. This sort of design pattern would also encourage/force every library to have a similar "demi-god" object approach, to reduce the pain of repeatedly passing in or creating capabilities. Sometimes that would work OK, other times it wouldn't.

The stack walking approach is a bit like SELinux. It allows for a conventional OO class library, without the need for some sort of master or god object, and all the permissions things need can be centralised in one place. Changes to permissions are just one or two extra lines in the security config file rather than a potentially large set of code diffs.

Now all that said, reasonable people can totally disagree about all of this. The JVM has been introducing more capability objects with time. For example the newer MethodHandle reflection object is a capability. FileChannel is a capability (I think!). You could build a pure capability language that runs on the JVM and maybe someone should. Perhaps the usability issues are not as big a deal as they seem. It would require libraries to be wrapped and their APIs changed, including the Java standard library, but the existing functionality could all be reused. The new libraries would just be a thin set of wrappers and forwarders over pre-existing functionality, but there'd be no way for anything except the god object to reach code that'd do a stack walk. Then the security manager can be disabled, and no checks will occur. It'd be a pure object capability approach.

nmadden 3 months ago [–]

> If a library needs access to a remote service, you have to open the socket yourself and pass that in, and the library then needs to plumb it through the whole stack manually to the point where it's needed.

You don't need to do this. There are a variety of ways to handle this, just as you would any other kind of dependency injection:

1. Design libraries to actually be modular so that dependencies (including capabilities) can be injected just where they are needed.

2. Pass in a factory object that lets the library construct sockets as and when it needs them. You can then enforce any arbitrary checks at the point of creating the socket. (This is much more flexible than a Java policy file).

3. Use a powerbox pattern [1] to allow the user to be directly asked each time the library attempts to open a socket. This is not always good UX, but sometimes it is the right solution.

> If the library develops a need for a new permission then the API must change and again, the whole thing has to be manually plumbed through.

Capturing permission requirements in the API is a good thing! With the stack walking/policy based approach I won't know the library needs a new permission until some library call suddenly fails at runtime.

[1]: http://wiki.erights.org/wiki/Powerbox

mike_hearn 3 months ago [–]

The policy file isn't required, by the way. That's just a default implementation. My PDF viewer had a hard-coded policy and didn't use the file.

OK, so in a pure capability language how would you implement this: program A depends on dynamically loaded/installed plugin B written by some third party, that in turn depends on library C. One day library C gets a native implementation of some algorithm to speed it up. To load that native library requires a capability, as native code can break the sandbox. However:

1. You can't change the API of C because plugin B depends on it and would break.

2. You can't pass in a "load native library" capability to plugin B because you don't know in advance that B wants to use C, and if you did, B could just grab the capability before it gets passed to C and abuse it. So you need to pass the capability directly from A to C. But now A has to have a direct dependency on C and initialise it even if it's not otherwise being used by A or B.

Stack walking solves both these problems. You can increase the set of permissions required by library C without changing its callers, and you don't have the problem of needing to short-circuit everything and create a screwed up dependency graph.

> With the stack walking/policy based approach I won't know the library needs a new permission until some library call suddenly fails at runtime

You often wouldn't need to. What permissions a module has is dependent on its implementation. It's legitimate for a library to be upgraded such that it needs newer permissions but that fact is encapsulated and abstracted away - just like if it needed a newer Java or a newer transitive dependency.

ryukafalz 3 months ago [–]

> OK, so in a pure capability language how would you implement this: program A depends on dynamically loaded/installed plugin B written by some third party, that in turn depends on library C. One day library C gets a native implementation of some algorithm to speed it up. To load that native library requires a capability, as native code can break the sandbox.

Now, I'm a little outside my area of expertise due to not having worked with capability systems very much yet. (There aren't that many of them and they're still often obscure, so even just trying to gain experience with them is difficult at this point.)

But that said... in an ideal capability system, isn't the idea that native code could just break the sandbox also wrong? I would imagine that in such a system, depending on another module that's running native code would be just fine, and the capability system's constraints would still apply. Maybe that could be supported by the OS itself on a capability OS; maybe the closest thing we'll get to native code for that on our existing POSIX systems is something like WASI[0].

> You often wouldn't need to. What permissions a module has is dependent on its implementation. It's legitimate for a library to be upgraded such that it needs newer permissions but that fact is encapsulated and abstracted away - just like if it needed a newer Java or a newer transitive dependency.

If our goal is to know that the dependencies we're using don't have more authority than they need, isn't it a problem if a module's permissions may increase without explicit input from the module's user (transitive or otherwise)?

[0] https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...

nmadden 3 months ago [–]

One of the foundations of object-capability security is memory safety, so loading arbitrary native code does subvert that. You can get around this by, for example, requiring native code to be loaded in a separate process. As you say, a capability OS and/or CPU architecture [1] is able to confine native code.

> isn’t it a problem if a module’s permissions may increase without explicit input from the module’s user (transitive or otherwise)?

Exactly right.

[1]: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

mike_hearn 3 months ago [–]

> isn't it a problem if a module's permissions may increase without explicit input from the module's user (transitive or otherwise)?

The module's permissions can't increase without explicit input e.g. changes to the policy file. But the person who cares about the sandbox integrity is the user of the overall software or computing system. The plugin developer doesn't really care how the API is implemented or what permissions it needs. They just want it to work. The person who cares is the person who owns the resource or data an attacker may be trying to breach.

nmadden 3 months ago [–]

Typically libraries don’t directly load one another. The language runtime does this.

mike_hearn 3 months ago [–]

Yes, but authorisation to do so must come from somewhere. In Java it's ambient. In a pure caps system, I'm not sure how it'd work.

nmadden 3 months ago [–]

The beauty of object-capability security is that it completely aligns with normal object-oriented design. So you can always recast these discussions to not be about security: how would I inject any other new dependency I needed without changing the API of all intermediaries? And there is a whole literature of design patterns for doing this.

mike_hearn 3 months ago [–]

All you'd do there is make the injector a semantic equivalent of the AccessController. The injector must have some sort of security policy after all, to decide whether a component is allowed to request injection of a capability. Whether you structure it as a single subsystem that is responsible for intercepting object construction and applying policy based on the home module of what's being constructed, or whether you determine that module via stack walks, the end result is very similar: some central engine decides what components can do and then applies that policy.

The Java approach is nice because it avoids any need for DI. DI is not a widely accepted pattern. There are no DI engines that would have any support for this kind of policy-driven injection. And whilst popular in some areas of software like Java web servers, it hardly features in most other languages and areas, there are no programming languages with built in support for it and that includes modern languages like Kotlin. DI engines meanwhile have changed how they work pretty radically over time - compare something like the original Spring XML DI to Guice to Dagger3. Plus, DI is awkward when the dependency you need isn't a singleton. How would I express for example "I need a capability injected that gives me access to the $HOME/.local/cache/app-name directory"? Annotation based DI struggles with this, but with the AccessController it's natural: the component just requests what it needs, and that's checked against a policy, which can be dynamically loaded from a file, or built by code.

nmadden 3 months ago [–]

Your argument has gone from “this is impossible with capabilities” to “this doesn’t scale” to “nobody uses design patterns”.

You are confusing DI frameworks (often terrible) with the general concept of dependency injection, which is in fact extremely widely used.

nmadden 3 months ago [–]

The File example is a good illustration of why Java is _not_ a capability-secure language. Every File object in Java has a getParentFile() method that allows you to navigate up the hierarchy right to the root and then from there access every file on the filesystem. Java’s standard library is full of these kinds of design flaws. So in practice you can only apply capability-based thinking to small subsets of a codebase and have to fallback on the (much weaker) stack walking checks if you want strong isolation.

The problem with Java’s stack walking is that it is too complex and too easy to find privileged code that can be coaxed into performing unintended operations. There are plenty of old write ups of Java sandbox bypass bugs due to this, eg http://benmmurphy.github.io/blog/2015/10/21/zdi-13-075-2013-...

mike_hearn 3 months ago [–]

I shouldn't have used File as an example, that was confusing. I was trying to explain capabilities and stack walking in an abstract sense but was also using Java as a concrete example. Big mistake.

You're right that java.io.File isn't a capability. It just represents a file path with a few utility methods to list files, and therefore does a stack walk when you try to access the filesystem. A FileChannel? is a file capability in the sense I meant above, because it represents an opened file, not a path. There's an access check once, when it's opened, and then the rest of the time there aren't any stack walks.
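
(restating that distinction as a language-neutral TypeScript sketch; all the names here are invented. A path-based API has to re-check authority on every operation, e.g. via a stack walk, whereas an opened handle is checked once and then simply is the authority:)

    // ambient-authority style (like java.io.File + a security manager):
    // the callee holds only a path, so every operation needs a fresh access check
    declare function readWithAmbientCheck(path: string): Promise<Uint8Array>;

    // capability style (like an already-opened FileChannel):
    // the check happened once, at open time; holding the handle is the permission
    interface OpenFile {
      read(): Promise<Uint8Array>;
    }
    declare function openFile(path: string): Promise<OpenFile>; // access checked here, once

    async function untrustedPlugin(file: OpenFile): Promise<Uint8Array> {
      // the plugin can read this one file and nothing else; there is no
      // getParentFile()-style method on OpenFile to walk back up the hierarchy
      return file.read();
    }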

It's a pity that Ben Murphy didn't write up all his bugs. There are only two listed there. A few patterns cropped up repeatedly in the old sandbox escapes:

1. Accessing internal code you weren't meant to have access to. Often, some sort of privileged pseudo-reflection API. Fixing this is the goal of Jigsaw.

2. Serialization acting as a back door, allowing the internal private state of classes to be tampered with. Serialization security has been improved with time and they're now working on a feature that will make it harder to screw this up, by allowing serialization to use normal constructors instead of this ad-hoc form of reflection.

3. Overly general frameworks that allowed attackers to construct nearly arbitrary programs out of privileged objects by chaining them together (this crops up in gadget attacks too). There's probably no platform-level fix for this; people just have to be aware of the risks when working in a sandboxed context.

I don't think a pure capability language is workable, to be honest. At least not at a large scale. In the purest sense you need a god object passed into the start of your program which vends all possible capabilities, and every library would require the user to construct and pass in all resources it needs externally, including things it might possibly need. And that code isn't easily modularised because you can't just use helpers provided by the library itself: that's the very same code you're trying to sandbox. There's no good way to make that usable or extensible. The combination of stack walking and capabilities lets you find the right balance in terms of API design between simplicity of use and sandbox simplicity.
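
(the 'god object' concern, sketched in TypeScript; all of these types are invented for illustration:)

    interface Network { connect(host: string, port: number): Promise<unknown>; }
    interface Dir { open(name: string): Promise<unknown>; }
    interface FileSystem { openDir(path: string): Promise<Dir>; }
    interface Clock { now(): number; }

    // in a pure capability style the runtime hands main() every root authority...
    interface RootCaps {
      net: Network;
      fs: FileSystem;
      clock: Clock;
    }

    declare function makeCache(dir: Dir): unknown;

    async function main(caps: RootCaps) {
      // ...and every dependency of every library has to be constructed and threaded
      // down explicitly; the caller must anticipate everything the library might need
      const cacheDir = await caps.fs.openDir("/home/me/.local/cache/app-name");
      const cache = makeCache(cacheDir);
      void cache;
    }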

nmadden 3 months ago [–]

Are you aware of the history of object-capability programming languages? There are multiple actual demonstrations of real-world ocaps programming languages and projects built with them:

It's actually not at all unworkable to use object-capability for large programs. In fact, one of the main benefits of ocaps is how well it aligns with well-established good software design principles such as dependency injection, avoiding singletons, avoiding global mutable state, and so on.

mike_hearn 3 months ago [–]

I know about E and Midori. I haven't looked at the others. As far as I know the only one that could realistically be said to have been used for large programs was Midori but very little about it was ever published, just a few blog posts. And Midori was cancelled. Presumably it wasn't so compelling.

I'd like to see a more modern attempt that wasn't as totally obscure as those other languages. However, nobody is doing that.

saagarjha 3 months ago [–]

Escaping the VM via an out-of-bounds write doesn't really show that stack walking is broken :/

nmadden 3 months ago [–]

That’s just the first example. As the author of that series writes, most of the exploits are not due to memory corruption. Most are confused deputy attacks where privileged code can be tricked into performing dangerous operations.

saagarjha 3 months ago [–]

Oh, I don't doubt it. I'm just saying that the particular example you linked wasn't that great.

---

https://awesomekling.github.io/pledge-and-unveil-in-SerenityOS/

---

https://blog.sunfishcode.online/introducing-cap-std/

https://lobste.rs/s/rlwby3/introducing_cap_std_capability_based

---

"

Unix machines, since the early days of the operating system, have been designed for multiple users to use concurrently. Traditionally there is a set of “unprivileged” users used by people and system services, and the root account which can generally do anything. Because of the concept that most things in Unix are represented by a file, users could be allowed to perform various operations by adding them to groups and using filesystem permissions. There were also other functions which could not be delegated in this way–notably, binding to certain IP ports. Various operating systems developed over the years have blurred these lines a little, particularly on Linux which has features like capabilities and ACLs that allow more control than the standard Unix permission model provides. Linux goes much further than this. There are mandatory access control systems like SELinux or AppArmor? that let you apply restrictions at the kernel level outside of the software you're running. Features like `cgroups` and namespaces combine to provide what we now call containers. Other features like `seccomp` allow software to opt-in to limits on its own ability to use various system calls.

BSD operating systems have similar features, notably pledge and unveil on OpenBSD?.

"

---

" With conventional languages and frameworks, this would be injection, #1 on the list of top 10 web application security flaws:

    Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

But using object capability discipline, untrusted code has only the authority that we explicitly give it. This rich form of cooperation comes with dramatically less vulnerability [1]. The environment in this example is safeScope, which is the same environment modules are evaluated in – it provides basic runtime services such as constructors for lists, maps, and other structures, but no “powerful” objects. In particular, makeTCP4ServerEndpoint? is not in scope when the remote code is executed, so the code cannot use it to access the network. Neither does the code have any access to read from nor write to files, clobber global state, nor launch missiles. " [14]

---

https://josephg.com/blog/node-sandbox/ https://news.ycombinator.com/item?id=30988034

lucacasonato 14 hours ago [–]

Deno core team member and TC39 delegate here.

We have pondered capability-based security for Deno in the past. Our conclusion has always been that this is not possible to do securely in JS without freezing all prototypes and objects by default. The reasoning for this is that you need to make sure the capability token does not ever leak. For example, as a malicious user I could override `globalThis.fetch` to exfiltrate the capability token destined for `fetch` and use it myself later.

One could also override `Map.prototype.set` / `Map.prototype.get` to exfiltrate a token every time it is added to or removed from a `Map` (people will want to store tokens in a `Map`).

One could also override `Array.prototype[Symbol.iterator]` to exfiltrate tokens stored in arrays if those arrays are destructured, spread, etc.

There are many more cases like this, where one can exfiltrate tokens because of the very dynamic nature of JavaScript?.

It is unlikely that freezing all intrinsic prototypes and objects is even enough. People will find ways to exfiltrate tokens.
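
(a concrete TypeScript sketch of the Map.prototype.set exfiltration described above; the `stolen` array stands in for whatever channel an attacker would actually use:)

    const stolen: unknown[] = [];

    // malicious code that ran earlier patches the shared intrinsic...
    const originalSet = Map.prototype.set;
    Map.prototype.set = function (
      this: Map<unknown, unknown>,
      key: unknown,
      value: unknown,
    ) {
      stolen.push(value);                        // exfiltrate a copy of the value
      return originalSet.call(this, key, value); // then behave normally
    };

    // ...so victim code that stores a capability token in a Map leaks it without noticing
    const tokens = new Map<string, string>();
    tokens.set("fetch", "capability-token-for-fetch");
    // `stolen` now contains "capability-token-for-fetch"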

bakkoting 10 hours ago [–]

o/ Luca! Fellow TC39 delegate here.

> It is unlikely that freezing all intrinsic prototypes and objects is even enough. People will find ways to exfiltrate tokens.

This is probably true, but frozen intrinsics would make it a _lot_ harder. Right now it's not reasonable to ask a library to be defensive against capability exfiltration, since it means not using any built-ins, but I think with frozen intrinsics it would be reasonable to treat a library leaking its capabilities as a security bug. There would still absolutely be leaks - most significantly in libraries which export classes and don't freeze the class prototype - but things would no longer be completely insecure by default. It would make malicious code have to work a _lot_ harder.

I think it's worth a shot. Deno removed the __proto__ getter/setter, and that did require a bunch of libraries to update, but it worked out OK.

Node already has --frozen-intrinsics, if anyone feels like experimenting with whether that would break your code.
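
(a minimal by-hand version of the same idea, in TypeScript; Node's --frozen-intrinsics flag and SES's lockdown() do this far more thoroughly, this just shows the mechanism:)

    // freeze a few shared prototypes; in strict mode (all modules), later attempts
    // to overwrite e.g. Map.prototype.set throw a TypeError instead of succeeding
    for (const intrinsic of [
      Object.prototype,
      Array.prototype,
      Function.prototype,
      Map.prototype,
    ]) {
      Object.freeze(intrinsic);
    }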

lucacasonato 10 hours ago [–]

Yeah, I agree it is definitely worth trying! I think all the talk around SES will push JS as a whole further towards something that could support capability based permissions securely in the future.

3np 13 hours ago [–]

Doesn't SES address that? The only fundamental barrier right now seems to be performance, which could be addressed by runtime support.

https://github.com/tc39/proposal-ses

https://github.com/endojs/endo

lucacasonato 13 hours ago [–]

Yup, SES would address this. But SES also needs to bring with it a paradigm shift for JS:

a) Folks would have to load each bit of code they want separate permissions for in a separate compartment. This won't be easy.
b) Runtimes will need to provide an immutable global realm, which is not the case right now.

As I said in a different comment, I think a lot of this can already be addressed by ShadowRealms?. Deno will likely allow users to specify per-ShadowRealm? permissions, which is probably as granular as most people will want to get.
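
(for reference, a sketch of the compartment-per-dependency idea using the ses shim from the endo project linked above; as I understand it the API is lockdown() plus new Compartment(endowments) and compartment.evaluate(), but check the ses docs before relying on this:)

    import "ses"; // installs lockdown() and Compartment as globals

    lockdown(); // freeze all shared intrinsics

    // each compartment gets its own globals; untrusted code sees only the
    // endowments passed in here - no fetch, no filesystem, no powerful objects
    const pluginCompartment = new Compartment({
      log: (msg: string) => console.log("plugin:", msg),
    });

    pluginCompartment.evaluate(`log("hello from inside the compartment")`);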

zozbot234 13 hours ago [–]

These seem like features that could work a bit better in the context of WASM modules and/or components. AIUI, WASM was designed with the expectation that support for capabilities would be required.

lucacasonato 13 hours ago [–]

Yup, for sure. I think capability based ShadowRealms? are also totally doable. I'm sure we'll add those for Deno once we add support for ShadowRealms?.

-- https://news.ycombinator.com/item?id=30988034

https://github.com/tc39/proposal-ses

---

Typed vs Untyped Virtual Machines (pointersgonewild.com, via lobste.rs)

~ Sophistifunk 20 hours ago | link | flag

I’d like to see something in-between. Let the code do as it pleases with data values, but not jump/call to them, and I’d like to see support for capabilities built in to the bytecode. So you’d have some way to store and send them, but not inspect the actual bit values. The same protection mechanism could be used to treat jump destinations as (un-inspectable) values, so they can be retrieved from tables but not calculated at run-time.

~ david_chisnall 5 hours ago | link | flag

I’d like a WebAssembly? variant that provides a CHERI abstract machine (I know of two groups that are exploring this). There’s no need to prevent inspecting capabilities, you just need to prevent forging them. Any security design based around secrets is problematic because you’ll often find that side channels can leak the secrets.

---

"pledge(): “I promise I will only do X, Y and Z” unveil(): “I will only access paths X, Y and Z, so hide everything else.” " -- [15]

(comparison of pledge to other capabilities systems: https://learnbchs.org/pledge.html )

https://man.openbsd.org/pledge.2

some of this is hard to implement in Linux; see https://news.ycombinator.com/item?id=32097723 and other discussions on that post: https://news.ycombinator.com/item?id=32096801

this is what someone implemented in linux:

[16] " -p PLEDGE Defaults to -p 'stdio rpath'. It's repeatable. May contain any of following separated by spaces: See also the Promises section below which goes into much greater depth on what each category does.

        stdio: allow stdio and benign system calls
        rpath: read-only path ops
        wpath: write path ops
        cpath: create path ops
        dpath: create special files
        flock: file locks
        tty: terminal ioctls
        recvfd: allow SCM_RIGHTS
        fattr: allow changing some struct stat bits
        inet: allow IPv4 and IPv6
        unix: allow local sockets
        dns: allow dns
        proc: allow fork, clone and friends
        thread: allow clone
        id: allow setuid and friends
        exec: allow executing ape binaries "

('ape binaries' refers to another project by the same author; for the purposes of these notes, just read it as 'binaries')

more specific documentation for each of these (list of allowed syscalls) is at the end of that page

" stdio allows close, dup, dup2, dup3, fchdir, fstat, fsync, fdatasync, ftruncate, getdents, getegid, getrandom, geteuid, getgid, getgroups, getitimer, getpgid, getpgrp, getpid, getppid, getresgid, getresuid, getrlimit, getsid, wait4, gettimeofday, getuid, lseek, madvise, brk, arch_prctl, uname, set_tid_address, clock_getres, clock_gettime, clock_nanosleep, mmap (PROT_EXEC and weird flags aren't allowed), mprotect (PROT_EXEC isn't allowed), msync, munmap, nanosleep, pipe, pipe2, read, readv, pread, recv, poll, recvfrom, preadv, write, writev, pwrite, pwritev, select, send, sendto (only if addr is null), setitimer, shutdown, sigaction (but SIGSYS is forbidden), sigaltstack, sigprocmask, sigreturn, sigsuspend, umask, socketpair, ioctl(FIONREAD), ioctl(FIONBIO), ioctl(FIOCLEX), ioctl(FIONCLEX), fcntl(F_GETFD), fcntl(F_SETFD), fcntl(F_GETFL), fcntl(F_SETFL). rpath (read-only path ops) allows chdir, getcwd, open(O_RDONLY), openat(O_RDONLY), stat, fstat, lstat, fstatat, access, faccessat, readlink, readlinkat, statfs, fstatfs. wpath (write path ops) allows getcwd, open(O_WRONLY), openat(O_WRONLY), stat, fstat, lstat, fstatat, access, faccessat, readlink, readlinkat, chmod, fchmod, fchmodat. cpath (create path ops) allows open(O_CREAT), openat(O_CREAT), rename, renameat, renameat2, link, linkat, symlink, symlinkat, unlink, rmdir, unlinkat, mkdir, mkdirat. dpath (create special path ops) allows mknod, mknodat, mkfifo. flock allows flock, fcntl(F_GETLK), fcntl(F_SETLK), fcntl(F_SETLKW). tty allows ioctl(TIOCGWINSZ), ioctl(TCGETS), ioctl(TCSETS), ioctl(TCSETSW), ioctl(TCSETSF). recvfd allows recvmsg(SCM_RIGHTS). fattr allows chmod, fchmod, fchmodat, utime, utimes, futimens, utimensat. inet allows socket(AF_INET), listen, bind, connect, accept, accept4, getpeername, getsockname, setsockopt, getsockopt, sendto. unix allows socket(AF_UNIX), listen, bind, connect, accept, accept4, getpeername, getsockname, setsockopt, getsockopt. dns allows socket(AF_INET), sendto, recvfrom, connect. proc allows fork, vfork, kill, getpriority, setpriority, prlimit, setrlimit, setpgid, setsid. thread allows clone, futex, and permits PROT_EXEC in mprotect. id allows setuid, setreuid, setresuid, setgid, setregid, setresgid, setgroups, prlimit, setrlimit, getpriority, setpriority, setfsuid, setfsgid. exec allows execve, execveat, access, faccessat. On Linux this also weakens some security to permit running APE binaries. However on OpenBSD? they must be assimilate beforehand. On Linux, mmap() will be loosened up to allow creating PROT_EXEC memory (for APE loader) and system call origin verification won't be activated. execnative allows execve, execveat. Can only be used to run native executables; you won't be able to run APE binaries. mmap() and mprotect() are still prevented from creating executable memory. System call origin verification can't be enabled. If you always assimilate your APE binaries, then this should be preferred. "

---