
Cyberspace – Musings in Force-Directed Graph Building

Centroid to Centroid

After experimenting for a few days with force-directed graphs (FDGs), instantiating the graphs both inside and outside the visualization engine, it appears to me that fusing and instantiating the graphs outside the visualization engine may be the less complex approach. There are always trade-offs when engineering a system, and the question of where to fuse and instantiate multiple graphs will not have a single correct answer. Here are a few thoughts; nothing particularly interesting, just typing prose instead of code:

Fusing graphs in three dimensions requires that the graphs do not “collide” in the coordinate space and that, generally speaking, the scales of the graphs match. If we bring in FDGs from multiple sources, we could strip out the coordinates (since they have already been assigned) and then rebuild the FDG using the nodes and edges of the newly fused graph (without the xyz coords). This is certainly possible, but so far I get the best results by building the FDG in the “object refinement” and “situation refinement” phases, prior to submitting the graph data to the visualization engine. This approach does mean that my future plans to add and subtract nodes from the graph in real time may not be readily achievable, so I may need to rethink this down the road.
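Here is a minimal sketch of that strip-and-rebuild step, using networkx as a stand-in for whatever graph library actually produces and fuses the FDGs; the “pos” attribute name and the function itself are assumptions for illustration, not a description of my actual pipeline:

```python
# Minimal sketch: fuse two source FDGs outside the visualization engine,
# discard their pre-assigned coordinates, and re-run a force-directed
# layout on the fused graph before export. Assumes networkx graphs with
# an (illustrative) "pos" node attribute holding the old xyz coordinates.
import networkx as nx

def fuse_and_relayout(graph_a: nx.Graph, graph_b: nx.Graph) -> nx.Graph:
    # Union of nodes and edges; shared node ids are merged.
    fused = nx.compose(graph_a, graph_b)

    # Strip the stale coordinates assigned by the original sources.
    for _, attrs in fused.nodes(data=True):
        attrs.pop("pos", None)

    # Rebuild the force-directed layout in 3D (Fruchterman-Reingold spring model).
    layout = nx.spring_layout(fused, dim=3, seed=42)
    nx.set_node_attributes(fused, layout, "pos")
    return fused
```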

If we build an FDG and update that graph in real time, the graph takes on a “springy” or “bouncy” quality, which is really not ideal for space and time travel within the graph. Consider if you were going to walk to the store to buy some milk, but the ground and the floors and the ceilings, indeed your entire world, moved and bounced around as you walked; it might not be fun to walk to the 7-Eleven. In other words, it might be like walking in an earthquake, but without objects falling and shaking around us and hurting us (except for the motion sickness caused by the visuals); after all, we are in a sim. The objects are in a sim. We are in a sim. It’s all simulation.

I’ve seen a number of bouncy and springy FDGs lately, including many that I have created, and I prefer to select nodes when they are stationary rather than when they are moving or bouncing around. On the other hand, I can instantiate the graph in the gaming engine inside void Start() rather than void Update(), so instantiating the big huge graph in the gaming engine does not necessarily mean the graph will be springy after it is built. Then again, if we instantiate the “big momma” graph before we import the graph data into the viz engine, we don’t have to worry about this at all; but we lose a bit of real-time capability in the trade.
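To make the Start-versus-Update distinction concrete, here is a rough sketch in Python/networkx terms rather than Unity C# (the graph, iteration counts, and render-loop stand-in are illustrative assumptions): settling the layout once up front yields a stationary graph, while refining it a few iterations per frame, seeded from the previous frame's positions, produces the springy behavior described above.

```python
# Sketch of the Start()-versus-Update() trade-off, in networkx terms
# rather than Unity C#.
import networkx as nx

G = nx.random_geometric_graph(200, 0.125, seed=1)

# "Start()" style: run the spring layout to (near) convergence once,
# then hand the frozen coordinates to the viz engine.
static_pos = nx.spring_layout(G, dim=3, iterations=200, seed=1)

# "Update()" style: a few iterations per frame, seeded from the last
# frame's positions, so nodes keep drifting while you try to select them.
live_pos = nx.spring_layout(G, dim=3, iterations=5, seed=1)
for frame in range(100):                      # stand-in for the render loop
    live_pos = nx.spring_layout(G, dim=3, pos=live_pos, iterations=5)
```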

When we start fusing graphs from many sensors and sources, we need the coordinate space to be cohesive and accurate. Building a large metaverse of cyberspace objects is non-trivial, interesting (at least to me; I doubt it interests many others), and requires a lot of work. There are plenty of opportunities for engineering trade-offs, and deriving a uniform coordinate or “world space” from myriad object bases is but one example.
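One way to think about that uniform world space is sketched below. It assumes each source delivers its own independently scaled layout (here, a dict of node id to xyz array, such as nx.spring_layout produces), normalizes each layout to a common scale, and offsets each graph into its own region so the graphs do not collide; the spacing value and the per-source namespacing are illustrative assumptions, not a fixed design.

```python
# Minimal sketch: bring several independently laid-out graphs into one
# cohesive world space by rescaling each layout to roughly a unit cube
# and offsetting each graph into its own region.
import numpy as np

def to_world_space(layouts: list, spacing: float = 3.0) -> dict:
    world = {}
    for i, layout in enumerate(layouts):
        coords = np.array(list(layout.values()))
        # Center this graph at the origin and normalize its extent.
        coords = (coords - coords.mean(axis=0)) / (np.ptp(coords, axis=0).max() + 1e-9)
        offset = np.array([i * spacing, 0.0, 0.0])   # one "cell" per source graph
        for node, xyz in zip(layout.keys(), coords):
            world[(i, node)] = xyz + offset          # namespace nodes by source
    return world
```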

Also, it is much easier to travel in space and time if the objects in the sim are not moving around, because when they move, we must constantly update the centroids, alterons, and all the other spacey-sounding reference points we need to warp and fly around cyberspace.
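As a tiny illustration of that point, with a static layout a reference point such as a graph's centroid only needs to be computed once at build time rather than every frame; the sketch below assumes the same illustrative “pos” attribute used earlier.

```python
# Tiny sketch: with a static layout, a navigation reference point such
# as the graph's centroid can be computed once at build time instead of
# being recomputed every frame as nodes bounce around.
import numpy as np
import networkx as nx

def graph_centroid(graph: nx.Graph) -> np.ndarray:
    pos = nx.get_node_attributes(graph, "pos")   # node id -> xyz array
    return np.mean(np.array(list(pos.values())), axis=0)
```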

When I close my eyes lately, I think that for most of the people on the planet, and indeed for most of human history, we tend to interact with the world within our visual spectrum, composed of physical matter we can touch and feel. We interact with the world on our own terms, but we see it as “the world”. A very small portion of the electromagnetic spectrum defines our daily lives, only because that is what we can see and feel as humans. However, since humankind is creating cyberspace at an exploding, exponential rate, we are creating a world we cannot see and cannot feel. We seek situational knowledge in a world we can neither see nor feel.

This reminds me of a story from traditional Buddhist teachings in which a fish argues with a bird that there is no land, because the fish has never seen land; therefore “there is no land”, at least according to our friend the fish, who lives his whole life in his wet world. In the same sense, we live in a world defined by our view of a very small portion of a huge electromagnetic spectrum, and we call this small spectrum “the world” and “our world”; but in fact, the world “as we know it” is only a very small portion of “the world as it exists”. To us, like fish in the sea, the world is what we see, only because that is what we see.

There is no spoon.