proj-oot-ootAssemblyOpsNotes2

http://www.aviduratas.de/lisp/lispmfpga/lispinlisp.html

---

" It is done: a run-time system has been written that maps the memory of the Lispm-FPGA as an array. In it, all of the Lisp data structures fundamental to a Lisp system are represented, namely

    characters
    numbers
    strings
    conses
    symbols
    vectors

as well as two structures that are needed for the handling of closures and code vectors:

    closures, which consist of a template and a vector
    templates, which are vectors of Lisp data plus untagged data

"

"everything can now be found in" http://www.aviduratas.de/lisp/lisp/mevalxp/

(but mb http://www.aviduratas.de/lisp/lispmfpga/code.html is the latest version?) -- [1]

(skimmed that, i don't think it's worth the time to read, unless you can find a description/spec of the language e's implemented)

---

READ? GET?

WRITE? PUT? SET?

EMIT? CONSUME?

---

3-Lisp shows the importance of REDUCE and NORMALIZE:

i bet these are core in Haskell as well.

---

"LISP (McCarthy, 1958) that was built upon: cons, nil, eq, atom, car, cdr, lambda, apply and id." -- [2]

---

MOV_OFFSET dest src offset

dest = *(src + offset)

---

http://cs.lmu.edu/~ray/notes/squid/

---

" roll...Take two numbers from the stack. The top number tells POSTSCRIPT how many times and in which direction to rotate the stack; the second number is how many items are to be rotated. "

---

https://github.com/jamesbowman/swapforth/blob/master/j1a/basewords.fs

(see section J1 in proj-plbook-plChIsaMisc for an explanation of what the stuff on the right means)

noop T alu ;
+ T+N d-1 alu ;
- N-T d-1 alu ;
xor T^N d-1 alu ;
and T&N d-1 alu ;
or T|N d-1 alu ;
invert ~T alu ;
= N==T d-1 alu ;
< N<T d-1 alu ;
u< Nu<T d-1 alu ;
swap N T->N alu ;
dup T T->N d+1 alu ;
drop N d-1 alu ;
over N T->N d+1 alu ;
nip T d-1 alu ;
>r N T->R r+1 d-1 alu ;
r> rT T->N r-1 d+1 alu ;
r@ rT T->N d+1 alu ;
io@ T _IORD_ alu io[T] alu ;
! T N->[T] d-1 alu N d-1 alu ;
io! T N->io[T] d-1 alu N d-1 alu ;
2/ T2/ alu ;
2* T2* alu ;
depth status T->N d+1 alu ;
exit T RET r-1 alu ;
hack T N->io[T] alu ;

---


This paper has constructions and proofs of consensus numbers of various primitives: Wait-Free Synchronization by MAURICE HERLIHY, 1991.

https://cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf

things that are proven to have an infinite consensus number in this paper are:

" Dolev et al. [7] give a thorough analysis of the circumstances under which consensus can be achieved by message-passing. They consider the effects of 32 combinations of parameters: synchronous versus asynchronous processors, synchronous versus asynchronous communication, FIFO versus non-FIFO message delivery, broadcast versus point-to-point transmission, and whether send and receive are distinct primitives. Expressed in their terminology, our model has asynchronous processes, synchronous communication, and distinct send and receive primitives. We model send and receive as operations on a shared message channel object; whether delivery is FIFO and whether broadcast is supported depends on the type of the channel. Some of their results translate directly into our model: it is impossible to achieve two-process consensus by communicating through a shared channel that supports either broadcast with unordered delivery, or point-to-point transmission with FIFO delivery. Broadcast with ordered delivery, however, does solve n-process consensus.

DOLEV, D., DWORK, C., AND STOCKMEYER, L. On the minimal synchronism needed for distributed consensus. J. ACM 34, 1 (Jan. 1987), 77-97. "

---

http://www.1024cores.net/home/lock-free-algorithms

" So what primitives are in your arsenal for implementation of advanced synchronization algorithms?

Compare-And-Swap Perhaps the most famous primitive; its other names are CAS, compare-and-exchange, compare-and-set, std::atomic_compare_exchange, InterlockedCompareExchange, __sync_val_compare_and_swap, LOCK CMPXCHG and others. It's an instance of a so-called atomic RMW (read-modify-write) operation. Its pseudo-code is:

    T compare-and-swap(T* location, T cmp, T xchg) {
        do atomically {
            T val = *location;
            if (cmp == val)
                *location = xchg;
            return val;
        }
    }

That is, it stores a new value (xchg) into a memory location only if it contains an expected value (cmp); in either case it returns the value that was stored in the location when the operation began. And all that is done atomically at the hardware level.

Fetch-And-Add Also an atomic RMW operation, and also conducted atomically in hardware. Aka atomic_fetch_add, InterlockedExchangeAdd, LOCK XADD. Below is the pseudo-code:

    T fetch-and-add(T* location, T x) {
        do atomically {
            T val = *location;
            *location = val + x;
            return val;
        }
    }

There are also variations like fetch-and-sub, fetch-and-and, fetch-and-or, fetch-and-xor.

Exchange Atomic RMW. Aka atomic_exchange, XCHG. Dead simple, but no less useful:

    T exchange(T* location, T x) {
        do atomically {
            T val = *location;
            *location = x;
            return val;
        }
    }

Atomic loads and stores They are not RMW (read-modify-write) operations, they are just independent atomic loads and stores. They are frequently unfairly underestimated. However, they play a fundamental role in synchronization algorithms, and they are what you should generally strive for - atomic loads and stores are better/cheaper/faster than atomic RMW operations.

Mutexes and the company Why not? The most stupid thing one can do is try to implement everything in a non-blocking style (of course, unless you are writing an infantile research paper, or betting money on it). Generally it's perfectly OK to use mutexes/condition variables/semaphores/etc on cold paths. For example, during process or thread startup/shutdown, mutexes and condition variables are the way to go.

"

---

" Wait-free algorithms usually use such primitives as atomic_exchange, atomic_fetch_add...and they do not contain cycles that can be affected by other threads. atomic_compare_exchange primitive is usually not used, because it is usually tied with a "repeat until succeed" cycle. " -- [4]

(my note: however CAS should still be provided b/c it has an infinite consensus number)

we should also provide weaker consensus-number thingees though:

"any collection of read-write registers (consensus number 1), fetch-and-increments (2), test-and-set bits (2), and queues (2) is not enough to build a compare-and-swap (∞). " [5]

(note that a queue with peek does have an infinite consensus number)

---

"The 8008's ALU supports eight operations: add, subtract, add with carry, subtract with carry, AND, OR, XOR, and compare. It also implements left and right shift and rotate operations. The 8008 also has increment and decrement instructions, extending the Datapoint 2200's instruction set. "

---

"

Brent Kirby establishes a number of computationally complete bases of stack operations in his Theory of Concatenative Combinators. You need some notion of “quotation” of stack terms. Using his nomenclature, the following sets of combinators are all Turing-complete:

            [B] [A] cons == [[B] A]
            [B] [A] sip  == [B] A [B]
            [B] [A] k    == A

    [D] [C] [B] [A] s'   == [[D] C] A [D] B
            [B] [A] k    == A

            [B] [A] cake == [[B] A] [A [B]]
            [B] [A] k    == A

[E] [D] [C] [B] [A] j'   == [[D] A [E] B] [C] B
                [A] i    == A

            [B] [A] take == [A [B]]
            [B] [A] cat  == [B A]
                [A] i    == A

            [B] [A] cons == [[B] A]
            [B] [A] sap  == A B

Using my preferred nomenclature, a convenient complete set to implement is:

          A dup == A A
       A B swap == B A
         A drop ==
        A quote == [A]
[A] [B] compose == [A B]
      [A] apply == A

" -- Jon Purdy, http://softwareengineering.stackexchange.com/a/188106/167249

---

http://llvm.org/docs/Coroutines.html :

Coroutine Manipulation Intrinsics

Coroutine Structure Intrinsics

---


https://www.forth.com/starting-forth/2-stack-manipulation-operators-arithmetic/ first mentions the following group of stack ops: SWAP DUP OVER ROT DROP

https://en.wikipedia.org/wiki/Stack-oriented_programming_language calls these same five "the basic Forth stack operators".

we have SWAP via ROTN. We have DROP via POP to r0. So we just need: DUP OVER ROTN POP PUSH, which we currently have.

note: you can get DUP OVER with 'PICK', and ROTN is called ROLL.

http://wiki.laptop.org/go/Forth_stack_operators adds 3 that can be defined in terms of the others:

?dup ( a -- a a | 0 ) dup if dup then ;
-rot ( a b c -- c a b ) rot rot ;
nip ( a b -- b ) swap drop ;
tuck ( a b -- b a b ) swap over ;

---

parallel: map reduce scan filter concat gather scatter

---

'query' could be for all of file listing, database query, and map / filter

---

"Go provides a smattering of blessed polymorphic types (slices, maps, chans, pointers) and functions (len, append, delete, chan send, chan recv) that go a long way. "

---

the AntShares VM ops

---

Map, FlatMap, Fold, and HashReduce

" Based on the number of collections read in f and the access patterns of each read, Map can capture the behavior of a gather, a standard element-wise map, a zip, a windowed filter, or any combination thereof. ... Conditional data selection (e.g. WHERE in SQL, filter in Haskell or Scala) is a special case of FlatMap where g produces zero or one elements. ...

---

saturated addition (QADD) (and fused 16-bit multiply and then saturated add?)

the ARM guys thought this was important:

" Enhanced DSP instructions. Adding these instructions to the standard ISA supports flexible and fast 16 × 16 multiply and arithmetic saturation, which lets DSP-specific routines migrate to ARM. A single ARM processor could execute applications such as voice-over-IP without the requirement of having a separate DSP. The processor can use one example of these instructions, SMLAxy, to multiply the top or bottom 16 bits of a 32-bit register. The processor could multiply the top 16 bits of register r1 by the bottom 16 bits of register r2 and add the result to register r3.

Saturation is particularly useful for digital signal processing because nonsaturating arithmetic would wrap around when the integer value overflowed, giving a negative result. A saturated QADD instruction returns a maximum value without wrapping around."

---

load or store multiple registers.

the ARM guys thought this was important:

" Thus, the ARM ISA instructions specifically load and store multiple registers. These instructions take variable cycles to execute, depending on the number of registers the processor is transferring. This is particularly useful for saving and restoring context for a procedure's prologue and epilogue. This directly improves code density, reduces instruction fetches, and reduces overall power consumption."

---

conditional execution

the ARM guys thought this was important:

" Conditional execution. An ARM instruction executes only when it satisfies a particular condition. The condition is placed at the end of the instruction mnemonic and, by default, is set to always execute. This, for example, generates a savings of 12 bytes—42 percent—for the greatest common divisor algorithm implemented with and without conditional execution. "

---

a 'preload' instruction that sends out a LOAD request to memory but doesn't block yet

---

" Interprocessor communication. An SMP OS requires communication between CPUs, which sometimes is best accomplished without accessing memory. ... Systems frequently must also synchronize asynchronously. One such mechanism uses the device's interrupt system to cause activity on a remote processor. These software-initiated interprocessor interrupts (IPI) typically use an interrupt system designed to interface interrupts from I/O peripherals rather than another CPU. "

---

"Now, imagine neural networks that would be "augmented" in a similar way with programming primitives such as for loops—but not just a single hard-coded for loop with a hard-coded geometric memory, rather, a large set of programming primitives that the model would be free to manipulate to expand its processing function, such as if branches, while statements, variable creation, disk storage for long-term memory, sorting operators, advanced datastructures like lists, graphs, and hashtables, and many more. ..."

---

Ethereum's EVM is introducing a STATIC_CALL opcode [6] that is a function call that guarantees that the function being called can have no side-effects.

---

https://blockstream.com/simplicity.pdf

much of the following is quoted or paraphrased from that link. i already copied this to plChMiscIntermedLangs.

types:

core Simplicity has 9 types of terms:

Simplicity's 'bit machine' has "two non-empty stacks of data frames. One stack holds read-only frames, and the other stack holds write-only frames. A data frame consists of an array of cells and a cursor pointing either at one of the cells or pointing just beyond the end of the array. Each cell contains one of three values: a zero bit, a one bit, or an undefined value." and 10 instructions:

---


trying to identify the (supposedly 10; http://wiki.osdev.org/C_Library claims that with PDClib "10 (plus one optional) required syscalls need to be implemented") syscalls in PDClib:

https://bitbucket.org/pdclib/pdclib/src/c8dc861df697/platform/posix/?at=default

'externs':

fork execve wait _exit unlink link environ

in files in there we also see the following external functions being used:

mmap munmap open read write lseek close signal raise

and these, but i don't think they are syscalls: strncmp strlen

also note that file '/dev/urandom' is used (in 'tmpfile')

also there's a bunch of constants, such as stdin, stdout, stderr, in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/functions/_PDCLIB/_PDCLIB_stdinit.c?at=default&fileviewer=file-view-default

and a few more constants in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/includes/signal.h?at=default&fileviewer=file-view-default

and a few more in https://bitbucket.org/pdclib/pdclib/src/c8dc861df697a6c8bddbcbf331d9b6fcae6e2f4d/platform/posix/internals/_PDCLIB_config.h?at=default&fileviewer=file-view-default

---

http://www.embecosm.com/appnotes/ean9/ean9-howto-newlib-1.0.html

"...requires an implementation of eighteen system calls and the definition of one global data structure..."

5.3. Standard System Call Implementations

    5.3.1. Error Handling
    5.3.2. The Global Environment, environ
    5.3.3. Exit a program, _exit
    5.3.4. Closing a file, close
    5.3.5. Transfer Control to a New Process, execve
    5.3.6. Create a new process, fork
    5.3.7. Provide the Status of an Open File, fstat
    5.3.8. Get the Current Process ID, getpid
    5.3.9. Determine the Nature of a Stream, isatty
    5.3.10. Send a Signal, kill
    5.3.11. Rename an existing file, link
    5.3.12. Set Position in a File, lseek
    5.3.13. Open a file, open
    5.3.14. Read from a File, read
    5.3.15. Allocate more Heap, sbrk
    5.3.16. Status of a File (by Name), stat
    5.3.17. Provide Process Timing Information, times
    5.3.18. Remove a File's Directory Entry, unlink
    5.3.19. Wait for a Child Process, wait
    5.3.20. Write to a File, write

---

so comparing the PDClib's platform-specific layer and newlib, the following are in the intersection:

fork execve wait _exit unlink link environ open read write lseek close raise ('kill' is used in newlib to send a signal) memory allocation (sbrk in newlib, mmap in PDClib)

additional functionality only in newlib: fstat getpid isatty stat times

additional functionality only in PDClib: memory deallocation (munmap) signal handler setup (signal) /dev/urandom

---

i already noted this elsewhere, but we should have a floating-point FMAC (fused multiply-add or fused multiply-accumulate)

---

compare-and-branch instructions:

---

Elementary matrix row operations [7]. These are used in Gaussian elimination and Gauss-Jordan elimination and to define the elementary matrices (which themselves are the generators of the general linear group of invertible matrices):

---

[8] (actually [9]) has a diagram with a bunch of circles in which the example system calls are:


some primitives for transactional memory:

---

https://github.com/CyberGrandChallenge/libcgc/blob/master/libcgc.h

(the CGC Application Binary Interface)

has the following syscalls:

this is part of the DECREE OS, a special simplified OS especially for the Cyber Grand Challenge.

---

Fundamental operations in key-value datastores are OPEN, CLOSE, GET, PUT, DELETE [10].

---

some fundamental matrix ops (thanks D.R.):

some fundamental mathy matrix ops:

---

some floating point ops:

+ - * /

pow sqrt sin cos tan asin acos atan atan2

log exp

fmod

ceil floor

and from webassembly [11]:

see also https://en.wikibooks.org/wiki/C_Programming/math.h#Pre-C99_functions

the RISC-V floating point instructions:

" dandrews on May 9, 2015 [-]

Takes me back a bit. In 1980 I was writing an APL interpreter for the Apple ][ (alas, not half completed) and purchased a copy of "Software Manual for the Elementary Functions" by Cody and Waite. It still sits on my bookshelf 35 years later, awaiting full translation of its algorithms into 6502 code. SQRT, ALOG/ALOG10, EXP, POWER, SIN/COS, TAN/COT, ASIN/ACOS, ATAN/ATAN2, SINH, COSH, TANH, and random number generation. Also useful and comprehensive testing notes for your nascent implementation, with tests written in Fortran. "

---

the conditional instructions remaining in ARM64:

CSEL  Wd = if cond then Wn else Wm
CSINC d = if cond then n else m+1
CSINV d = if cond then n else !m
CSNEG d = if cond then n else -m

other conditional things:

CNEG: conditional negate
CCMN (immediate): conditional compare negative (immediate), setting condition flags to the result of the comparison or to an immediate value
CCMN (register): conditional compare negative (register), setting condition flags to the result of the comparison or to an immediate value
CCMP (immediate): conditional compare (immediate), setting condition flags to the result of the comparison or to an immediate value
CCMP (register): conditional compare (register), setting condition flags to the result of the comparison or to an immediate value
CINC: conditional increment
CINV: conditional invert
CSET: set destination register to 1 if condition is true, o/w to 0
CSETM: set all bits of destination register to 1 if condition is true, o/w to 0

you can see what these are at http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100069_0609_00_en/pge1427897722020.html

conditionals in x86 include (from [13]): CMOVcc FCMOVcc Jcc SETcc

---

https://pdfs.semanticscholar.org/1703/4bf6acffd6f2a454ce9dab48bcbb6149d7f4.pdf

A Novel RISC Processor with Crypto Specific Instruction Set, by Bhagyasree P., C. Silpa, M. J. C. Prasad

3 instructions: BITP, BYTP and RXOR

BITP: permute bits:

takes 4 operands: a source, a target, and two permutation arguments, PR1 and PR2. PR2 gives the bits of SOURCE to be copied into those bits of TARGET designated by PR1.

BYTP: permute bytes:

takes 3 operands: source, target, permutation. For example, if the permutation bitwidth is 8, then you can permute 4 bytes: the n-th byte in the target comes from the byte in the source specified by the two bits in the permutation starting at position 2*n.

RXOR: reduce of XORs of word:

takes 2 operands: a mask and a source. Outputs one bit into a flag register. The source is first BITAND'd with the mask, and then the bits in the resulting word are reduced by the operation XOR.

---

from the above it seems like the following may be useful:

PERMUTE-BITS: takes a word and a permutation (only one permutation arg, not 2) and permutes the bits in the word

PERMUTE-WORDS: takes a stack and a permutation (only one permutation arg, not 2) and permutes the items in the stack

RXOR: takes a word (no mask) and reduces the bits in the word by the operation XOR

---

some examples of old CISC instructions [14]:

FFT, QUICKSORT, POLY

array access with bounds checking

X86 String Operations eg REP MOVS DEST SRC (pseudocode semantics/implementation given at [15] )

in VAX:

---

[16]

a proposal for some SIMD instructions

---

https://github.com/LLK/scratch-flash/tree/develop/src/primitives

https://github.com/kach/recreational-rosette/tree/master/basis claims that the 99 primitives in the Scratch files above can be reduced down to 74.

---

some other subsets of RISC-V:

http://www-inst.eecs.berkeley.edu/~cs61c/sp18/projs/02-2/ http://www-inst.eecs.berkeley.edu/~cs61c/sp18/projs/03-2/

the intersection of those two subsets:

add mul sub sll mulh slt xor div srl or rem and lb lh lw addi slli slti xori srli ori andi sw beq lui jal

ones in one but not the other (symmetric set difference): sra srai sb sh bge bne blt bltu jalr

---

RISCV RV32I minus fence*, e*, csr*, scall, sbreak, rd* is just:

compare to the above intersection of the two subsets in the previous section: add mul sub sll mulh slt xor div srl or rem and lb lh lw addi slli slti xori srli ori andi sw beq lui jal

RV32I omits mul mulh div rem.

Here is what the above intersection subset omits, and is also omitted in both of the parent subsets in the previous section: auipc bge bgeu lbu lhu sltiu sltu

Here is what the above intersection subset omits, but is in one of the parent subsets in the previous section: jalr bne blt bltu sb sh srai sra

Here is the intersection of RV32I and the intersection of the two subsets in the previous section (this is just the intersection subset minus mul mulh div rem):

---

" Smalltalk (programming language): How many opcodes are in the Squeak VM's instruction set?

Eliot Miranda, Author of the Cog and BrouHaHa Smalltalk VMs. Ex tech lead for VisualWorks Smalltalk ('99 - '06). Updated Nov 6, 2016 · Author has 118 answers and 106k answer views

That's more difficult to answer than I'd like because the question can be interpreted a few different ways.

...

If we try and construct the minimal set we have

(receiver variable + literal variable + temporary variable + indirect temporary variable) * (push + store)

push literal + push(create) closure + push Array + push receiver + push thisContext

pop + dup

returnTopFromMethod + returnTopFromBlock

jump + jumpFalse + jumpTrue

send + send super + send special

callPrimitive

That's 24.

...

returnTopFromMethod: returns to the sender of the current method (unwinding the stack if invoked from within a closure),

...

most Squeak implementations will distinguish three different sets of sends. Set one comprises the normal sends that send a message whose selector is a literal in the current method with a given number of arguments to some object on the stack. Set two comprises the singleton super send which sends a message whose selector is a literal in the current method with a given number of arguments to the receiver on the stack, but starts the lookup above the class in which the current method occurs (this send is used to invoke methods in superclasses that are overridden in subclasses). Set three comprises the "special selector" sends, which send a message whose selector is one of 32 frequently used messages, which are stored in a shared Array. Hence special selector sends save space in the method's literal frame since the selector is taken from the array, not the method's literals. Since the special selector sends include selectors for arithmetic and comparison most implementations implement these sends to first check for SmallInteger (fixnum) arguments inline, and if so, shortcut the send, or short-cut a comparison followed by a conditional jump.

...

I hope the above explains why I think of the Squeak bytecode set as comprising about 30 opcodes, one of which, callPrimitive:, is an escape into an arbitrary set of unsafe operations to be used only by an optimizing compiler that will evolve over time.

...

We sort the inline primitive operations by arity. Nullary primitives occupy the 0-999 range. Unary primitives occupy the 1000-1999 range, etc.

We define the following inlined primitives:

1000 unchecked class

1001 unchecked pointer numSlots

1002 unchecked pointer basicSize

1003 unchecked byte8Type format numBytes (includes CompiledMethod)

1004 unchecked short16Type format numShorts

1005 unchecked word32Type format numWords

1006 unchecked doubleWord64Type format numDoubleWords

2000 unchecked SmallInteger #+. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2001 unchecked SmallInteger #-. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2002 unchecked SmallInteger #*. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2003 unchecked SmallInteger #/. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2004 unchecked SmallInteger #\\. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2005 unchecked SmallInteger #//. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2006 unchecked SmallInteger #quo:. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2016 unchecked SmallInteger #bitAnd:. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2017 unchecked SmallInteger #bitOr:. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2018 unchecked SmallInteger #bitXor:. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2019 unchecked SmallInteger #bitShift:. Both arguments are SmallIntegers and the result fits in a SmallInteger (* depends on word size)

2032 unchecked SmallInteger #>. Both arguments are SmallIntegers

2033 unchecked SmallInteger #<. Both arguments are SmallIntegers

2034 unchecked SmallInteger #>=. Both arguments are SmallIntegers

2035 unchecked SmallInteger #<=. Both arguments are SmallIntegers

2036 unchecked SmallInteger #=. Both arguments are SmallIntegers

2037 unchecked SmallInteger #~=. Both arguments are SmallIntegers

2064 unchecked Pointer Object>>at:. The receiver is guaranteed to be a pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger

2065 unchecked Byte Object>>at:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The result is a SmallInteger.

2066 unchecked 16-bit Word Object>>at:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The result is a SmallInteger.

2067 unchecked Word Object>>at:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The result is a SmallInteger.

2068 unchecked DoubleWord Object>>at:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The result is a SmallInteger or a LargePositiveInteger.

2069 unchecked QuadWord Object>>at:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The result is a SmallInteger or a LargePositiveInteger.

3000 unchecked Pointer Object>>at:put:. The receiver is guaranteed to be a pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger

3001 unchecked Byte Object>>at:put:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The argument is a SmallInteger. The primitive stores the least significant 8 bits.

3002 unchecked Word Object>>at:put:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The argument is a SmallInteger. The primitive stores the least significant 16 bits.

3003 unchecked DoubleWord Object>>at:put:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The argument is a SmallInteger. The primitive stores the least significant 32 bits.

3004 unchecked QuadWord Object>>at:put:. The receiver is guaranteed to be a non-pointer object. The 0-relative (1-relative?) index is an in-range SmallInteger. The argument is a SmallInteger. The primitive stores the least significant 64 bits.

" -- [17]

---