Causality and the Narrowing of Probability: Path Prediction Using Statistical Occurrence

We live in a universe where causality is king. It is often inferred that the act of investigating an event alters its outcome, and that subatomic particles are governed by probabilities, but inference does not necessarily make this true. The truth is that we frequently lack the finesse to investigate things properly, usually needing instruments about two orders of scale below the subject, and in all of history no one has seen something happen without a prior cause. We have been shown examples that are likely artefacts of our understanding and of our system of description, such as the two-slit experiment; it is very likely that our interpretation is wrong because our system is not designed to handle it, and the result would appear a perfectly logical occurrence if we knew more about how everything actually worked. Resorting to probabilities is quite often an admission that we don't. Mathematics is itself an estimating system: a multiple of any object is really just an approximate grouping of unequal items. Academia has overlooked this and mistaken the tools for the real thing.

Mathematics, physics and probability are artificial tools that mimic reality; even their most basic elements assume things for convenience and practicality.

We are aware that everything is interdependent and follows paths. Statistics, which of late has become little more than propaganda support, was not designed as such. It is a method by which you can investigate a subject, nothing more. How you interpret the results is where personal choice and ideology, rather than the results themselves, quite often become the key factor. Once statistics are used to try to prove or disprove something, creating absolutes, they are rendered pretty much worthless and untrustworthy.

We can almost guarantee that we, as simplistic biological forms with inaccurate and illogical assessment and calculation, are not going to fully understand or work out the exact position. The better we are at it, the slightly more accurate we can be: modern man is generally more knowledgeable than past man, with the average person today knowledgeable to roughly the level of a top scientist a century before. The difference between top and bottom can be pictured as two moving, probably parallel lines, with the educated and knowledgeable person usually sitting somewhere between the two.

This is my suggestion for an estimate of the case. Some people would dispute this image, arguing that the bottom know very little in comparison to the top, but even someone at the lowest level can function so much better than the greatest minds of the past if the latter were simply transplanted, as they were, into today's society. Their abilities might let them quickly catch up and excel, but their current knowledge would be massively below even the lowest. It also implies that the difference between people is, in proportion, reducing year by year.

Now we come to Statistical Traffic Analysis and its relation to information. Statistical traffic analysis, also called network analysis or metadata analysis, was a technique first really suggested and used by Gordon Welchman, a British mathematician who worked at Bletchley Park during the Second World War.

It is basically the use of the traffic flow of discrete parts of information and messages, gathered through low-level interception, to determine what information they may contain and how relevant it is.

For example, in today's age you might interrogate the routers around a secure source to infer a hierarchy of IP addresses within that structure based on timing, periodicity, and message length. Or you might place small CubeSat or sprite satellites in close proximity to others, along with stations that can receive and store messages, even if they cannot decode them, retransmitting them in bursts; something like Starlink could be suitable for this purpose, with a private band or section set aside for it. You would not need to enter the structure's network to get a general idea and build a model of dependencies, links, and possibly the content of that information, especially if the route is artificially conditioned to allow for analysis by seeding: supplying random bits of tagged information, which can be given a deemed importance, and using them to map out where they go and the routes they take. Done carefully, the internal system, group or person would not necessarily be aware that it is being mapped out and investigated.
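As a rough illustration, the sketch below (in Python, with entirely hypothetical addresses, timestamps and message lengths) shows the kind of model you could build from intercepted metadata alone: a weighted graph of who talks to whom, scored by fan-out, volume and how periodic the traffic is, without ever reading the message content.

```python
from collections import defaultdict
from statistics import pstdev

# Hypothetical intercepted metadata: (time in seconds, source, destination, length in bytes).
# No message content is used; only who talks to whom, when, and how much.
observations = [
    (0,   "10.0.0.1", "10.0.0.2", 1200),
    (60,  "10.0.0.1", "10.0.0.2", 1180),
    (120, "10.0.0.1", "10.0.0.2", 1210),
    (15,  "10.0.0.2", "10.0.0.3", 300),
    (75,  "10.0.0.2", "10.0.0.4", 310),
    (135, "10.0.0.2", "10.0.0.5", 295),
]

edges = defaultdict(list)   # (src, dst) -> timestamps of messages seen on that link
volume = defaultdict(int)   # src -> total bytes sent
for t, src, dst, length in observations:
    edges[(src, dst)].append(t)
    volume[src] += length

def regularity(times):
    """1.0 means perfectly periodic traffic; near 0.0 means irregular or too few samples."""
    if len(times) < 3:
        return 0.0
    times = sorted(times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return 1.0 / (1.0 + pstdev(gaps) / mean_gap)

fan_out = defaultdict(set)  # src -> set of distinct destinations
for (src, dst) in edges:
    fan_out[src].add(dst)

# Crude "hub score": nodes that send a lot, to many destinations, on a regular
# schedule look like superiors or relays in the hierarchy.
for node, dests in fan_out.items():
    avg_reg = sum(regularity(edges[(node, d)]) for d in dests) / len(dests)
    score = len(dests) * volume[node] * (0.5 + avg_reg)
    print(node, "destinations:", len(dests), "hub score:", round(score, 1))
```

Seeding would simply mean adding known, tagged records to the same observation list and watching which links they subsequently appear on.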

Information or knowledge can also be mapped out in a similar way, using statistical analysis to zoom in on the likely cause and effect, or to narrow the probabilities. Say, for instance, you wanted to find a serial killer or something like that. You could use the statistical likelihood of events, characteristics and cases, linking them all together, to find a future focal point at which the suspects would be narrowed down to a single person or point. Working back from that point, you could then try to trace the events that would lead to it and the people likely to be involved: start with everybody and process a flow that limits the numbers as the information is entered. In some ways profiling is really just a statistical use of past known people, although a lot of motivational projections probably say more about the internals and reasoning of the psychologists than of the subjects. It is a big assumption that we all work mentally in the same ways, but we do seem to work by cause and effect.
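A minimal sketch of that narrowing, under assumed numbers: start with a uniform prior over all candidates, then multiply in the likelihood of each observed characteristic and renormalise. The names, traits and probabilities below are hypothetical and only illustrate the flow that limits the numbers as information is entered.

```python
# Candidates and their known traits (entirely hypothetical).
candidates = {
    "person_a": {"lives_nearby": True,  "night_shift": True,  "owns_van": False},
    "person_b": {"lives_nearby": True,  "night_shift": False, "owns_van": True},
    "person_c": {"lives_nearby": False, "night_shift": True,  "owns_van": True},
}

# Each piece of evidence: (trait, P(observation | trait matches), P(observation | it does not)).
evidence = [
    ("lives_nearby", 0.9, 0.2),   # incidents cluster near the candidate's area
    ("night_shift",  0.8, 0.3),   # incidents happen late at night
    ("owns_van",     0.7, 0.4),   # a van was reported at two scenes
]

# Uniform prior: with no information, everybody is equally likely.
posterior = {name: 1.0 / len(candidates) for name in candidates}

for trait, p_match, p_no_match in evidence:
    for name, traits in candidates.items():
        posterior[name] *= p_match if traits[trait] else p_no_match
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}  # renormalise

# The probability mass concentrates on fewer people as each piece of information is entered.
for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.2f}")
```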

With the required information and knowledge, we can use the statistical likelihood of something being correct in the same way that a link in a neural net is amplified or reinforced by re-accessing that particular node. But this depends on freely accessible information, not information restricted to what is considered relevant. In those cases the likelihood can be trained out of a system by bias, which is effectively the weighting of characteristics based on personal preference.
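The sketch below illustrates that contrast with hypothetical node names and weights: repeated re-access reinforces one node, as the evidence alone would suggest, while a preference-driven bias weighting applied afterwards can still train that choice out of the system.

```python
# Two competing explanations, starting with equal weight (numbers are hypothetical).
weights = {"explanation_a": 1.0, "explanation_b": 1.0}

def reinforce(node, amount=0.5):
    """Strengthen a link each time new information re-accesses that node."""
    weights[node] += amount

# Independent pieces of information keep pointing at explanation_a.
for _ in range(4):
    reinforce("explanation_a")

print("evidence-only weights:", weights)          # explanation_a is clearly ahead

# Now apply a preference-driven bias toward explanation_b.
bias = {"explanation_a": 0.3, "explanation_b": 3.0}
biased = {node: w * bias[node] for node, w in weights.items()}

print("after biased weighting:", biased)          # explanation_b now "wins"
```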

In many ways this is how people seem to work naturally, and how groups work too. Our brains seem to be multi-pointed neural nets, and groups a simple logical extension of that. When fed restricted or biased information, people's and groups' systems are trained away from what should be the likely or logical choice. Weighting should therefore be applied very carefully, and only after the focus has been calculated, so you can see how it affects the result.

So basically, information and knowledge are not something a person has or has not, as in the case of wealth; they are an accumulation of related events, characteristics and cases that have been validated as far as the person is happy to inquire. What is more important is to have an open mind that accepts new and modified facts without a sense of repression or discrimination based on ingrained emotional feelings.