notes-visual-visualization

if you have a distribution cut into parts along multiple dimensions, and you want readers to add up (integrate out) one dimension before they get what they are looking for, use an icon array, not a pie chart.

e.g. here's a study that gave physicians the same information in four different ways. The goal was to determine whether the 'investigational treatment' is much better or much worse than the 'conventional treatment'. The icon array caused study participants to give much better answers:

http://www.ncbi.nlm.nih.gov/core/lw/2.0/html/tileshop_pmc/tileshop_pmc_inline.html?title=Click%20on%20image%20to%20zoom&p=PMC3&id=27896_eltl3914.f1.jpg

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC27896/

---

an idea for a VR (somewhat 'cyberspace') style of exploring high-dimensional data. We are going to explore one dependent variable (a scalar value, as in a 2-D chart) plotted against N independent variables, where N can be greater than or equal to 3.

Now, if N were 3, then theoretically every point in 3-D space should be colored by a value. But if you did that, all the points right in front of you would occlude the points further away, making it hard to see the whole picture.

So instead, what we will do is have a 3-D grid of small objects (spheres or cubes), surrounded by empty space. The color of each object indicates the value at that point. The empty space prevents the nearby points from occluding everything else, which allows you to see into the distance in all directions. Note that the size of the objects is held constant, rather than being used to indicate anything; this allows you to use your sense of perspective to tell which spheres receding into the distance are closer to you. It would look a lot like the image below, except that you'd typically rotate it by 45 degrees so that there is a sphere directly above, below, to the left, and to the right of you (rather than up-left, up-right, down-left, and down-right), and except that this picture doesn't show the color values:

https://www.pinterest.com/pin/387168899209869765/
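Here's a minimal flat-screen sketch of that grid idea (not VR) using matplotlib; the function f is just a made-up placeholder for the dependent variable. Marker size is held constant and only color carries the value:

```python
# Sketch: a sparse 3-D grid of constant-size markers, colored by the
# dependent value at each grid point. f is a placeholder example.
import numpy as np
import matplotlib.pyplot as plt

def f(x, y, z):
    return np.sin(x) * np.cos(y) + 0.3 * z

coords = np.linspace(-3, 3, 7)          # coarse grid: lots of empty space
x, y, z = np.meshgrid(coords, coords, coords)
values = f(x, y, z)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# constant marker size: size is reserved for your sense of perspective,
# only the color encodes the dependent value
ax.scatter(x.ravel(), y.ravel(), z.ravel(),
           c=values.ravel(), cmap='viridis', s=40)
plt.show()
```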

Extension: we can plot up to 3 dependent variables at once in each sphere by using RGB color channels.
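A tiny sketch of this RGB extension; the per-channel min-max normalization is my assumption:

```python
import numpy as np

def rgb_colors(a, b, c):
    """Map three dependent variables (same-shaped arrays) to RGB colors,
    normalizing each channel independently (an assumption)."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span else np.zeros_like(v)
    return np.stack([norm(a), norm(b), norm(c)], axis=-1)

# usage with the scatter sketch above:
#   ax.scatter(..., c=rgb_colors(a, b, c).reshape(-1, 3))
```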

Extension: we can browse documents which are arrayed in 3-dimensional space by putting the documents in place of the spheres.

Extension: for 6 dimensions of independent variables:

we surround each sphere with a partial ('minor') grid of smaller spheres which are very close to it (going about 1/3 of the way to the next main ('major') grid point, if even that far). Call the main gridlines the 'major' gridlines, and call these new ones the 'minor' gridlines.

These new 'minor' gridlines go in more directions, but those directions point towards vectors in between the major gridlines. So, if the major gridlines go along the vertical, horizontal, and z-axis directions, such that from each major grid point you can move in one of 6 directions (up, down, left, right, z-, z+), then from each new, minor point you can move in one of 8 diagonal directions: up-left-z+, up-right-z+, up-left-z-, up-right-z-, down-left-z+, down-right-z+, down-left-z-, down-right-z-. (Will having 8 vectors instead of 6 cause trouble for the math?)

The spacing between the objects along the minor gridlines is much smaller, because between each two major gridline points there is space for many minor gridline objects.

The objects along the minor gridlines get rapidly smaller as they get further from the major grid point that they surround; it's like perspective, but as if an object were infinitely far away (so its size goes to 0) by the time it is about 1/3 of the way to the next major grid point.

The major gridlines represent motion along the currently selected 3 major dimensions. The minor gridlines are for dealing with the other 3 (minor) dimensions. The objects along the minor gridlines give a taste of how the dependent value varies along the minor dimensions when you are at any given point in the major dimensions.
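A sketch of this minor-grid layout for a single major point, assuming 3 minor spheres per diagonal direction and a size that shrinks linearly to zero at the 1/3 cutoff (the linear falloff is my guess at 'like perspective'):

```python
import itertools
import numpy as np

def minor_grid(major_spacing=1.0, steps=3, base_size=1.0):
    """Offsets and sizes of the small 'minor' spheres around one major point.

    The 8 diagonal directions stand in for the +/- directions of the 3 minor
    dimensions; sizes shrink to zero at 1/3 of the way to the next major point.
    """
    cutoff = major_spacing / 3.0
    diagonals = [np.array(s) / np.sqrt(3) for s in itertools.product((-1, 1), repeat=3)]
    spheres = []
    for direction in diagonals:
        for k in range(1, steps + 1):
            dist = cutoff * k / (steps + 1)          # strictly inside the cutoff
            size = base_size * (1 - dist / cutoff)   # shrinks linearly to 0 at the cutoff
            spheres.append((direction * dist, size))
    return spheres

print(len(minor_grid()))  # 8 directions * 3 steps = 24 minor spheres per major point
```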

All (most important) movement controls are continuous:

In order to orient, there are two 3-D 'compass direction legends' displayed; one of them shows the currently-selected 3 major directions, and the other shows the currently-selected 3 minor directions. Unfortunately, since these directions can be any vector (any point on the 6-D sphere), they can only be shown by writing out a 6-tuple at each compass point, which gets unwieldy.

Hmm... this suggests that we confine motion between dimensions such that each major and minor direction is a simple interpolation (rotation) between only 2 directions. Eg, 'up' can be A, or it can be B, or it can be some vector in between A and B, but it can't be some vector in between A, B, and C. This would prevent us from eg directly visualizing the direction of the gradient in some space, however... So maybe allow such complicated/mixed directions, but have a 'snap-to-rotate' control that moves you back to a simpler situation where each major and minor direction points purely along one dimension.
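A small numpy sketch of what 'a simple interpolation between only 2 directions' and 'snap-to-rotate' could mean concretely (the function names are mine):

```python
import numpy as np

def blend_direction(dim_a, dim_b, theta, n_dims=6):
    """A view direction that is a rotation between basis dimensions a and b:
    theta = 0 points purely along dim_a, theta = pi/2 purely along dim_b,
    and anything in between mixes exactly those two dimensions."""
    direction = np.zeros(n_dims)
    direction[dim_a] = np.cos(theta)
    direction[dim_b] = np.sin(theta)
    return direction

def snap_to_axis(direction):
    """'snap-to-rotate': replace a mixed direction with the nearest pure axis."""
    axis = int(np.argmax(np.abs(direction)))
    snapped = np.zeros_like(direction)
    snapped[axis] = np.sign(direction[axis]) or 1.0
    return snapped

d = blend_direction(0, 4, np.pi / 6)   # mostly dimension 0, partly dimension 4
print(d, snap_to_axis(d))
```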

(Alternative: a simpler alternative to all this is to only visualize 3 major dimensions at a time, allowing you to put the equivalent of a minor dimension into the z-axis rather than having these smaller spheres. This makes the visualization easier to grok, because you don't have these little spheres clustered around each major point, where the meaning of their direction is different from movement in the space of the 3 major dimensions. This would greatly reduce the number of controls, too.)

Extension: for more than 6 dimensions (or more than 3 dimensions, for the alternative):

Extension: discrete motion controls:

(instead of assigning one button to each choice here (that would be 6*(6-1) = 30 buttons for the swaps), we have one button each for 'snap-to', 'reflect', 'swap', 'assign', and then 6 buttons to indicate each major and minor dimension; eg to swap up-down with minor-direction-2, you would hit 3 buttons in sequence: 'swap' and then 'up-down' and then 'minor-direction-2')
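A sketch of that button-sequence idea as a tiny parser; the button names, and the operand counts for everything other than 'swap', are assumptions:

```python
# Parse discrete-control button sequences like ('swap', 'up-down', 'minor-2')
# into commands, instead of dedicating one button to every combination.
AXES = {'up-down', 'left-right', 'z', 'minor-1', 'minor-2', 'minor-3'}
OPERANDS = {'snap-to': 0, 'reflect': 1, 'swap': 2, 'assign': 1}  # assumed counts

def parse_presses(presses):
    """Group a stream of button presses into (verb, *axes) commands."""
    commands, current = [], []
    for button in presses:
        if button in OPERANDS:
            current = [button]              # a verb starts a new command
        elif button in AXES and current:
            current.append(button)          # axis buttons are operands
        if current and len(current) == 1 + OPERANDS[current[0]]:
            commands.append(tuple(current))
            current = []
    return commands

print(parse_presses(['swap', 'up-down', 'minor-2', 'reflect', 'z']))
# -> [('swap', 'up-down', 'minor-2'), ('reflect', 'z')]
```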

extension: controls: one hand could be a 6dof control, where the various kinds of rotation of the hand could indicate the direction of rotation, and moving the hand could indicate the direction of displacement. It might be easier and less harmful to the hand to use both hands, though, as if there were a sphere in between your two hands with a stick coming out from it horizontally on each side, and a handle at the end of each stick which is gripped by (or stuck to) each hand; this allows you to do 6dof controls without ever rotating your hands (or, rather, while rotating them however you like), which is probably less harmful to them.

you could work in another 4dof via having each thumb be on a virtual trackpad. And you could have 10 buttons (one for each finger).

This suggests that it could be preferable to use a system in which only 6dof + 4dof is required. The 'simpler alternative' mentioned above lends itself to that: 6dof for movement along the 3 selected ('major') dimensions; 3dof to rotate one unseen dimension into the selected dimensions; 1dof to target the unseen dimension from a 'number line' of ordered dimensions. snap-to-rotate, reflect, swap, assign, dim0, dim1, dim2 take up 7 of the 10 buttons; when using 'assign' the buttons could temporarily mode-switch to representing decimal numerals. This leaves 2 buttons for actions while navigating, and 1 button to 'escape' to other sets of functions.

A fun way to practice this sort of navigation would be a simple fighter-pilot arcade game in N-dimensional space (where the 2 actions would be 'fire phasers' and 'fire missiles', or maybe 'fire phasers' and 'activate directional shield').

---

details on the fighter pilot game:

I have an idea for a (possibly VR) video game. It's a simple fighter-pilot game where you fly a spaceship and shoot at (and get shot at by) other spaceships. But it takes place in more than 3 spatial dimensions.

Instead of mapping some controls in a fixed way to the 4th dimension, some controls to the 5th dimension, etc, the way it works is that all dimensions are interchangeable; you can 'rotate' which dimensions you are looking at (and moving in). At any time there are 3 visible dimensions (and these are also the only dimensions that you can immediately accelerate in). So, at the beginning of the game, even if there are eg 8 dimensions total, you start out looking at dimensions 1, 2, and 3 (with eg dimension 1 mapped to vertical, dimension 2 mapped to horizontal, and dimension 3 mapped to the z-axis). But, by using the controls, you can remap which dimensions are visible. For example, you could make the vertical axis represent dimension 5 instead of dimension 1.
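A sketch of what this remapping amounts to on the client side: keep 3 N-dimensional view vectors, project every position onto them for display, and remapping an axis just swaps out one view vector (0-based dimension indices here; the names are mine):

```python
import numpy as np

N_DIMS = 8

def basis_vector(dim, n=N_DIMS):
    e = np.zeros(n)
    e[dim] = 1.0
    return e

# the 3 visible axes start out as dimensions 0, 1, 2 (vertical, horizontal, z)
view_basis = np.stack([basis_vector(0), basis_vector(1), basis_vector(2)])

def to_display(position_nd, view_basis):
    """Project an N-D world position onto the 3 currently visible axes."""
    return view_basis @ position_nd

def remap_axis(view_basis, axis_index, new_dim):
    """Eg make the vertical axis show dimension 4 instead of dimension 0."""
    view_basis = view_basis.copy()
    view_basis[axis_index] = basis_vector(new_dim)
    return view_basis

ship = np.array([1.0, 2.0, 3.0, 0.0, 5.0, 0.0, 0.0, 0.0])
print(to_display(ship, view_basis))                    # [1. 2. 3.]
print(to_display(ship, remap_axis(view_basis, 0, 4)))  # [5. 2. 3.]
```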

I think this sort of interface could be useful IRL for browsing high-dimensional data (which is why i was initially thinking about it). But it could also make a fun fighter-pilot game. And if you wanted to learn to use it for browsing high-dimensional data, a fighter-pilot game would probably be a good format for a tutorial.

Here are the controls:

The controls above are continuous; you can smoothly rotate in any direction.

There are also (virtual?) buttons beneath each thumb and each finger besides the pointer finger, which gives you 8 more discrete buttons. These could be assigned as follows:

The way that swap, reflect, assign work is:

You might also/instead want to have something like hotkeys that quickly remap the visible axes to different preset subsets of dimensions, so that the player can quickly scan through all dimensions to see what's happening. Of course, you may instead want to reserve some keys for other functionality, too (eg 'match velocity with target', which could be useful to keep up with enemies or to allow a group of allies to travel together).

Because of the complexity of navigating through higher-dimensional space, you can see that this sort of thing lends itself to multi-player crews aboard one ship. The pilot's hands are occupied with the 6dof controller, and the pilot will probably be pretty busy just trying to keep directions straight and won't have time to do stuff like futz with shields and weapon choices, coordinate with allies, etc; you could probably benefit greatly from having a separate pilot, gunner (the gunner has similar difficulties as the pilot, because they can aim in any direction in higher-dimensional space), navigator (who keeps track of the big picture in large hyperspace battles, and tells the pilot how to travel safely if they need to leave the region where they are and go to some other region where there is another cluster of ships to be fought), and commander (who coordinates with allies, strategizes, and maybe takes care of ship systems like power configuration) aboard each ship. You could even maybe have multiple pilots ("i'll take dimensions 0-4 and you take dimensions 5-8").

The navigator and commander would probably choose to use a slightly different display, which is a 3-D 'radar' (eg 3rd-person point of view instead of 1st-person) whose selected dimensions can be rotated in a similar way to the pilot's. Of course the pilot might choose to use this display too, which has the advantage that reorienting the ship doesn't also rotate the display. Instead of the 6dof acceleration controls, the pilot could choose instead to set a goal position (possibly with waypoints; or maybe the navigator can set up a bunch of waypoints and the pilot can target one or another of them at any time). Or, instead of acceleration, the pilot could choose to instead control velocity relative to some other 'baseline/reference frame' ship (or planet or star or flight group) (this could be especially useful with flight groups of allies, all of whom could set their reference frame to be the center of mass of their flight group, allowing them to then use velocity controls and easily stick together with each other).
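A small sketch of that relative-velocity control, with the reference frame taken to be the flight group's center-of-mass velocity (the names and the clamping rule are assumptions):

```python
import numpy as np

def acceleration_command(my_velocity, commanded_relative_velocity,
                         reference_velocity, dt, max_accel):
    """Turn a commanded velocity (relative to a reference frame) into a
    clamped acceleration for this tick."""
    target_velocity = reference_velocity + commanded_relative_velocity
    accel = (target_velocity - my_velocity) / dt
    norm = np.linalg.norm(accel)
    if norm > max_accel:
        accel *= max_accel / norm
    return accel

# reference frame = center-of-mass velocity of the flight group (4-D example)
group_velocities = np.array([[1.0, 0, 0, 0], [1.2, 0, 0, 0], [0.8, 0, 0, 0]])
reference = group_velocities.mean(axis=0)
print(acceleration_command(np.array([0.9, 0, 0, 0]), np.zeros(4),
                           reference, dt=0.1, max_accel=0.5))
```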

And/or, perhaps the navigator plans and sets up large-scale motion ('we want to go over there without getting close to the enemy capital ships on the way; i'll set up the goal and waypoints and move them as needed depending on changing circumstances'), and the pilot refines it (the UI tells the pilot which way to go to get closer to the next waypoint, and the pilot tries to go in that direction while also avoiding collisions with other nearby ships, dodging obstacles, trying to stay out of the line of fire of other nearby ships, and sometimes dogfighting with nearby enemies).

Allies could send suggested goal locations/waypoints to each other.

It would also be fun to have a small number of directional shields, each of which only protects against incoming fire from one direction. Then you could have another crewmember whose job is to move the shields around (actually this would be a fun mechanic even in 3-space). And of course drones would be fun too, because someone would have to tell the drone where to go.

In order to keep track of which axis is mapped to which dimension, you'd have to have a compass-legend-like thingy on the screen somewhere that tells you the unit vector that each display axis is currently pointing along, something like: vertical: (0, 0, 1, 0, 0, 0, 0, 0); horizontal: (0.7, 0.7, 0, 0, 0, 0, 0, 0); z-axis: (-0.7, 0.7, 0, 0, 0, 0, 0, 0).

In order to keep track of enemies in all unseen dimensional directions, you'd have to have many 2-D 'radar' overlays at once. I guess for 8 dimensions you'd need 4 overlays.

The game might get more difficult as there are more dimensions. I imagine that the game would let you choose the number of dimensions for each event, and most people would play in 4-D first and then only try 5-D once they are good at 4-D, only try 6-D once they are good at 5-D, etc.

Otoh the game might get simpler with more dimensions because it might be too easy to evade enemies when desired. This could possibly be dealt with by placing walls in most of the dimensions so that you are confined to a small 'arena' in all but a few dimensions.

other fun things:

the client-server API would be fixed, but players would be free to use custom clients/UIs.

the client-server API would be pretty simple; everyone only has to keep track of:

and then implement basic physics (F = MA, p = p + v*deltat, v = v + a*deltat), and damage.
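That physics step works unchanged in any number of spatial dimensions; a quick sketch:

```python
import numpy as np

def step(position, velocity, force, mass, dt):
    """One tick of the basic physics above: a = F/m, p += v*deltat, v += a*deltat.
    position, velocity, and force are just length-N arrays."""
    accel = force / mass
    position = position + velocity * dt
    velocity = velocity + accel * dt
    return position, velocity

# 5-dimensional example
p, v = np.zeros(5), np.zeros(5)
f = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
for _ in range(10):
    p, v = step(p, v, f, mass=2.0, dt=0.1)
print(p, v)
```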

but really, the server only needs to accept acceleration and weapons-firing events (a per-game immutable ledger of such), and allow players to query everything that it knows (unless there is secret info, eg cloaking). All players can then simulate physics and reproduce all of the other calculations. As long as acceleration is limited, situations where a player observes something wrong because the server hadn't given them an acceleration update yet should be rare.

instead of relying on acceleration limits to make inconsistency rare, we could have an 'epoch' system where, in each timestep, the server waits for each player to check in, accepts accelerations and weapons-firing events from the player for the current timestep, and then tells the player what occurred in the previous timestep. If a player doesn't check in, either everyone waits for them, or the epoch times out after some fixed amount of time which is dynamically set at the beginning of the game (and maybe readjusted later) depending on ping times to each player. For example, the epoch time could be set to the 80th-percentile ping time of the combined set of 10 pings to each player. Instead of having players check in, the server could use UDP multicast to send updated states to everyone once per epoch, and accept UDP packets with events from everyone.
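A rough sketch of one server epoch, with an in-process queue standing in for the network layer (all the names are assumptions; only the 80th-percentile timeout rule comes from the note above):

```python
import queue
import time
import numpy as np

def epoch_timeout(ping_samples_per_player, percentile=80):
    """Per-epoch timeout: the 80th-percentile ping over ~10 pings to each player."""
    all_pings = np.concatenate([np.asarray(p) for p in ping_samples_per_player.values()])
    return float(np.percentile(all_pings, percentile))

def run_epoch(event_queue, player_ids, timeout):
    """Collect (player_id, events) pairs for one epoch, stopping at the timeout.
    Players who don't check in simply contribute no events this epoch."""
    deadline = time.monotonic() + timeout
    received = {}
    while len(received) < len(player_ids):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            player_id, events = event_queue.get(timeout=remaining)
        except queue.Empty:
            break
        received[player_id] = events
    return received

timeout = epoch_timeout({'alice': [0.03] * 10, 'bob': [0.08] * 10})
q = queue.Queue()
q.put(('alice', ['accelerate +x']))
print(run_epoch(q, {'alice', 'bob'}, timeout))  # bob times out this epoch
```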

we may also want to allow 'drones' which have a program attached to them, and just do what their program says (and perhaps also communications channels to allow players to send commands to the drones?)

player communications could be done out-of-band.

assuming the existence of a sandbox programming language VM, the server could send the clients various fancy programs to do additional computations, allowing stuff like eg gravity, wormholes, etc to be added later without the user having to install client updates.

computing collisions could be annoying. It would be simplest if each object had to be hyperspherical or hypercubical. But perhaps we want to allow some or all of:
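For the simplest case above (every object a hypersphere), the overlap test is dimension-independent; a quick sketch:

```python
import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Hypersphere overlap test; identical in any number of dimensions."""
    distance = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    return distance <= radius_a + radius_b

print(spheres_collide([0, 0, 0, 0, 0], 1.0, [0, 0, 0, 0, 1.5], 1.0))  # True
```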

---