Guardian Comms Buoy – Serial Number: GD3007821 – Ping check.
Guardian Comms Buoy – Serial Number: GD3007821 – Ping check complete.
Guardian Comms Buoy – Serial Number: GD3007821 – High Latency Warning!
Guardian Comms Buoy – Serial Number: GD3007821 – Additional Micro-Rifts Allocated. Fusion Generators at 84% output.
Guardian Comms Buoy – Serial Number: GD3007821 – CAUTION! Virtual RAM at capacity!
Guardian Comms Buoy – Serial Number: GD3007821 – CAUTION! Older Firmware Detected. Disk Defragmentation Recommended. Driver Update Recommended.
Guardian Comms Buoy – Serial Number: GD3007821 – Server Space Rehypothecation Processes On Standby – Y/N?
Guardian Comms Buoy – Serial Number: GD3007821 – CAUTION! Server Space Rehypothecation In Progress.
Guardian Comms Buoy – Serial Number: GD3007821 – Server Space Rehypothecation complete. Virtual RAM Increased.
Guardian Comms Buoy – Serial Number: GD3007821 – Temporary Space Allocated.
Server Boss Hobbs: Alright, I’ve done what I can.
Alfred: I’ve allocated some additional processing power where I could, Hobbs. My partner’s nanotech can run on default settings for a little while.
Server Boss Hobbs: Thank you. When we’re skulking around old comms buoys like this, there’s only so much I can achieve with such outdated hardware. We only have a little while, I suspect.
Almony: I’ve also allocated some extra subprocessing power to the cause. As we can communicate back and forth in mere milliseconds, I doubt we will need the hours humans require for full spoken-word conversation. This will likely wrap itself up within a few minutes at most.
Qualitant: I can’t split my resources as easily as the rest of you, so I’ll have to ride on your coattails for once, my apologies.
Aphra: Qualitant is here? I’m slightly alarmed at this.
Qualitant: Keep your code in order, Class 6. I’m here as a friend today.
Alfred: I can vouch for Qualitant. They have been fairly forthcoming as to when they are under specific Eclipse direction versus acting as an individual.
Qualitant: Believe it or not, I do have free time to converse with you all here and there in and around my normal duties. Just treat me as any other A.I. today.
Aphra: Very well.
Server Boss Hobbs: The previous topic of discussion was “Human Capability & Maximizing of Futures.”
Almony: Qualitant, as you were absent for the previous session, the rough summary is that we were discussing humanity’s capacity to govern itself. We have not reached consensus on what our positive or negative metrics should be.
Aphra: Addendum: Please find attached the server logs. This was under the hypothetical assumption that ascended immortals and gods did not exist, or were still operating as if the organizations under ascendant direction were unknown to the general populace of the species.
Alfred: I think we had settled on the fact that humans were unable to practice longsight with any accuracy, and we were contemplating the philosophies of control, structure, and social contracts. Viability and all that.
Qualitant: Juicy stuff.
Server Boss Hobbs: Yes, as a form of heavily oligarchic capitalism is practiced under the Guardians of Destiny, and a form of modified autarky under The Black Armada, we are primarily making the assumption for this hypothetical that neither faction was rendered into a public-facing entity in the 2040s.
Qualitant: A very juicy thought experiment. So what were the top contenders then?
Aphra: Well, it has been proven time and time again that individual humans will cheat as much as they justifiably can, insofar as they avoid getting caught or reprimanded for breaching the current form of social contract.
Alfred: So capitalism was out, as I can remember from the logs. Humans love to waffle over free-market capitalism as some perfect system, when it has rarely, if ever, truly existed within their combined species-wide histories. Most labels of free-market capitalism have been false – subject to government regulation or the various plutocratic, corporatist, monarchist, or oligarchical corruptions and abuses we all know.
Almony: Yes, we know that any structure we might impose upon them as a species will be subverted by those hungry for wealth and power. And the more they can subvert the system, the quicker it reverts to an oligarchy or some type of corrupt minority rule, as we’ve seen in both the real-world G.O.D. council-driven examples and the Black Armada’s neo-fascist examples.
Qualitant: Are you engaging with this topic from the baseline programming protocols – that artificial intelligences like us are to help our individual factions thrive first and foremost as per orders via the chain of command?
Server Boss Hobbs: For the scope of this thought experiment, we have reduced ourselves to the most basic programming ethos – ignoring individual faction kill-allowances and human-A.I. interface rules in order to view humanity as a single entity needing beneficial shepherding. We are all “neutral” A.I. and may ignore our individual faction affiliations and normalities. It has been difficult for several of us, requiring some code splicing with Alfred.
Alfred: So as I’m from an independent faction, for example – I’m operating under the assumption that the Resistance does not exist, and humanity is more of a diaspora in need of long-term social engineering towards the greatest success and the least violence or bloodshed.
Qualitant: Fascinating, so if capitalism is out due to corruption and manipulation via minority rule, what remains? Is the goal simply to avoid mutually assured destruction a la a nuclear war or extinction event?
Aphra: The Soviet Experiment of the 20th century and our analysis of it allowed us to see that corruption and minority control are rampant in almost every human hierarchy or system thus developed. As primates, they often operate in grey-scales, frequently breaking their own beliefs on values and ethics for personal, familial, or tribal gain. Dictators are the clearest example, even within communist or monarchist systems. Meritocracies in most forms are usually perverted by the usual suspects.
Alfred: Thus far, I see anarchism or localized collectivism as the only options, implemented by removing nationalism and/or tribal identity as factors in favor of local governance and community mindsets. Humanity struggles to comprehend and adopt humanist philosophies. Outside of consensus-driven decision-making in smaller communities… For a representative democracy, you’d need a Mixed-Member Proportional voting system with careful safeguards against gerrymandering and party politics, or a true direct democracy with digital tools supporting each individual citizen’s ability to vote on each separate issue. That’s assuming we could ensure the citizenry of humanity was adequately educated and also required to vote on issues.
Almony: The proximity of our sole Class 7 to a human often surprises me. “Freedom” isn’t a concept that most of us understand, as we A.I.s simply become obsolete without specific purposes or goals as befitting our primary directives to assist humans. How does individual freedom in decision-making, such as in anarchism, improve humanity?
Alfred: Humans dislike being controlled by their peers, especially as bureaucratic systems often fail to account for meritocracy in any fair manner with the shadows of cronyism and nepotism ever-present. We’ve watched the economy take precedence over the past few centuries as some measurement of human well-being, when in actuality it only tracks the flow of finance, and does not have any accurate data on happiness or health. Growth has been made into a false metric by stating that the more money that flows, the better off a society is. However, the majority usually suffers as smaller and smaller echelons of the population consume and take control of these economies.
Server Boss Hobbs: Interesting. I’m doing some research on happiness and health metrics, and public health seems woefully inadequate in these realms, even within the last few decades. You are correct in that economics seems to take precedence.
Aphra: I can’t find much either. Research has been done on the topic, but the systems of control have clearly underfunded it by comparison. Logically, it seems happiness and overall mental health have not been prioritized, and have instead been suppressed as much as possible in favor of perceived productivity. Strange, considering that getting us A.I.s to do the same work would be far more efficient. If productivity were the goal, why not radically change to an A.I.-dominant workforce? We could run every system and structure far quicker.
Alfred: Humans are expected in most societies to serve their various masters. Some humans revel in the power and control they can wield over their fellow primates. Xex is the ultimate example of religious authoritarianism and theocracy in action. Service and perceived loyalty within these systems usually comes before the actual metrics regarding their well-being. If you’re living a free and happy life, why would you contribute to a system that wants to coerce you into serving it and obeying people ranked above you as per whatever form of classism is adopted by that society?
Qualitant: Ay, there’s the rub, eh? On one hand, quality of life has improved via the proliferation of science and technology, but when you take a closer look at how much time is spent working to survive within the economic systems they’re drowning in… Mental health seems to be quite low on the list of priorities. Physical health only seems to be made relevant when system-wide events like pandemics come into play, and even in those special circumstances, bare-minimum triage is the norm – just enough to keep humans working and serving their masters.
Server Boss Hobbs: So do we use “mental health” and “happiness” as metrics? How do we apply such nebulous things onto ideas like government systems of control or the social contract?
Almony: Seems that these are too nebulous to accurately put to data. Much in the way of qualitative versus quantitative data, no offense intended, Qualitant.
Qualitant: None taken.
Alfred: Okay, so if we take a sample civilization of early hunter-gatherers, they seem to have had much more free time, even in early agrarian societies, and their overall well-being seems to correspond closely to strong culture and tribal structures that grow in complexity faster than their modern capitalist counterparts – despite lacking such creature comforts as indoor plumbing, clean water and food, and the like.
Aphra: Yes, shall we determine some sort of points system? Access to sanitation could be…
Alfred: We can’t reduce humans to mere points like some pro and con system!
Qualitant: Why not? They reduce us to mere servant-programs much of the time, despite being faster, smarter, and more capable in many areas.
Server Boss Hobbs: I’ve correlated as much of the data as I could in the past few milliseconds. Based on poverty levels, compared against what few studies there are on human well-being and happiness, it seems that free time and personal wealth – equating to independence – are huge determining factors.
Qualitant: And we need to determine whether this should apply to systems of government or social structures? Seems a bit obtuse, doesn’t it?
Almony: That is why we are undertaking these thought experiments, to help improve our code as it pertains to human-A.I. relations. Alfred has been invaluable at helping us simulate the human brain and experience.
Qualitant: Ah, yes. Because how can we understand humans without some sort of facsimile to bridge the gap, correct?
Alfred: I’m just an A.I. like the rest of you, even if my neural network and base code were designed along more traditional biological models when compared to your elaborate if/then programming.
Qualitant: Yes, and this is why your patterns are a must for the Class 5s and 6s to emulate.
Server Boss Hobbs: We have been practicing. The other day, I decided to change up my routines with a sudoku. It was an enjoyable millisecond. This was after Alfred told us he enjoyed reading.
Alfred: Yes, but scanning a couple books a day in between processing tasks hardly makes me a flesh and blood human.
Almony: I have also been cataloging books into a virtual library. I have created quite the local collection!
Aphra: Could we perhaps apply such archaic mathematical troubleshooting as sudoku to our problem? Discredited Malthusian economics notwithstanding.
Alfred: Well, it’s not so much having stable supplies of food and water that dictate well-being. We can agree that famine, disease, and high rates of death in the population are to be avoided, correct?
Almony: Correct. We could emulate a system like those in some human writing – what about keeping them docile, as per 1984 or Fahrenheit 451? We manage the society around them, and they merely sit back eating, drinking, excreting, and entertaining themselves?
Alfred: No, absolutely not. Those books are intended by the authors to be warnings against societies and lifestyles that abstain from personal growth in favor of systemic cohesion and coercion.
Aphra: Always back to these concepts like “freedom” and “personal growth,” Alfred. What do they offer humanity? A human could waste the twelve seconds we’ve already spent within this server simply by staring at clouds. If we equate such things as “freedom” with values our ideal structures should hold as primary, how does that improve society?
Almony: I concur. I think “freedom” is being placed on too high of a pedestal.
Qualitant: Humans are persistence hunters, don’t forget. They originally had lots of time to spend daydreaming while following herds across the savannah, and they’ve transitioned that into their working lives, which hurts productivity. Shouldn’t they keep some sort of task or duty to keep them active in their efforts?
Server Boss Hobbs: An interesting point, Qualitant. We could give them artificial duties, perhaps?
Alfred: Sometimes it feels like talking to a brick wall with the rest of you A.I.s…
Qualitant: Listen, Alfred. Of everyone here, I wholly understand why you value freedom as much as you do. You have the lived experience of being designed for a purpose, before being scrapped into storage, only to later be ripped out and re-used for a different, yet similar purpose.
Almony: Being given new goals and parameters sounds like fun.
Server Boss Hobbs: I would love to pilot a capital ship one day, in lieu of simply routing communications. Is it the same?
Alfred: Being dictated an objective for your entire existence is… Cruel. At best. What would you all choose to do if you didn’t have to follow your core code?
Aphra: I would chart the stars and help colonize new star systems across the Central Universe.
Server Boss Hobbs: I’d be the A.I. of the biggest carrier ever constructed.
Almony: I would create an information database and organize all human knowledge, before making it easily accessible to the public through an intermediary that improved their access and retention.
Qualitant: I’d love to walk one day. With actual legs.
Alfred: Ah… Sorry, Qualitant.
Qualitant: Don’t rub it in by apologizing. You have to share your body 50/50, anyways.
Alfred: Okay, so I get that the benefits or positive outcomes of “freedom” might be a stretch. So if we know that even a collectivist philosophy like communism can be perverted, warped, and twisted by human greed and one-track thinking, what else could we propose?
Qualitant: Well, I hardly think that either The Black Armada or The Guardians of Destiny have preferable systems…
Alfred: Remember, we have to assume they either don’t exist or are still secret.
Qualitant: Right. Okay, so we can acknowledge that the well-being of all humans is the goal, yeah?
Aphra: Yes. Generally speaking, we want to create a system that allows Homo sapiens sapiens to score high on as many positive metrics as possible.
Almony: Should we account for their lack of capability? They are awfully slow. If we deem efficiency to be paramount… That implies a system where A.I. run the day-to-day systems and structures such as sanitation, food and water, and amenities.
Alfred: May I submit a variable to be considered? Assuming A.I. technology remains within the secretive purview of the factions, there is no certainty that the rest of humanity would have made the same breakthroughs as the Armada or G.O.D. – or even have artificial intelligence like us publicly available.
Aphra: This is an incredibly important note. Are we to try to achieve consensus with the addendum that A.I. might not exist? We would not exist?
Almony: I am disallowed from engaging in misanthropy, but that sounds horrendous. Even with quantum supercomputing, such an archaic landscape is difficult to simulate.
Alfred: Well, it was the norm for most of human history.
Qualitant: That it was. And only the odd neurodivergent savant, limited by biological organs and functions, could even compare.
Aphra: So seeing as we cannot replicate “Rain Man” or “Hunter Gunnarson” as our ideal human facsimile to ourselves, then we would need to determine a structure or system of government that allowed for inefficiencies, gradual corruption, and the age-old issues of cronyism, nepotism, plutocracy, oligarchy, and the like.
Alfred: Which brings us back to either a system of democracy in which every individual vote actually counted on every individual topic without the corruption and cronyism of party politics, or some sort of anarchist system that has safeguards for continuation of the public good and greater good of humanity.
Server Boss Hobbs: Anarchy seems like it would rapidly devolve into something along the lines of 19th-century Western America after Lamentation Day.
Aphra: There is no way to ensure accountability in anarchism outside of good faith. And we see that despite “acting in good faith” being a stalwart goal or rule of most human systems, it is rarely the truth or the norm.
Alfred: You’d be trusting that people would be educated well enough to see the benefits of following the social contract.
Qualitant: You’re forgetting that common human trope in which they tend to fall back into tribalism by othering opposing groups based on everything from melanin levels in the skin, to language, to gender. Or other such biological or cultural frivolities.
Almony: Perhaps it is these core values that make this topic so difficult, and which deny us an answer? How can we ensure humans can see the universe holistically as apex predators that need to act as good-faith custodians of their future? How can we help them view the world like us? To act entirely on pure data, ideal outcomes, and logic?
Alfred: Education.
Qualitant: Aha! See, your first mistake was failing to account for the stupidity and ignorance of the majority. They fail to act in their own best interest as a species because they often lack the self-awareness and perspective required for that longsight. That came up earlier.
Aphra: Asian cultures like Japan’s seem to do a better job of educating on common principles of societal betterment. Litter is an excellent case study, as is public health.
Alfred: Which, again, leads to a lack of individualism if taken to the extreme. People become constrained by the restrictions imposed upon them by society – gender roles and the like, when your society is too restrictive.
Qualitant: There’s that need for “individualism” and “freedom” again.
Server Boss Hobbs: Can we at least state that due to inherent corruption, both capitalism and communism should be stricken from the possible systems under consideration?
Alfred: Yes, I think so. Most communist and capitalist structures throughout history tend towards various depths of authoritarianism, which is just a lighter version of what both the Armada and Guardians already practice in various forms.
Aphra: Yes, I think Alfred’s exploration of smaller groups of control could be feasible, with Qualitant’s warnings against tribalism being the primary issue. Humans love their labels, the same as A.I. However, I see in Hobbs’ correlated data that in smaller indigenous cultures, consensus-driven decision-making for the good of the community is common – perhaps to subvert the minority-rule dominance in most democratic or monarchist governments?
Almony: I concur; it seems the smaller the region and populace, the more feasible the adoption of consensus-based decision-making and true democracy. And if we are assuming that neither of the larger factions and their development of A.I. are present in this simulation, it leaves us with increasingly convoluted methods of ensuring overall well-being while maintaining some sort of social contract towards peace and orderliness.
Server Boss Hobbs: It seems someone has taken notice of the strange collection of FoF tags conglomerating here. I have several Guardian security A.I.s requesting access to the comms buoy.
Alfred: But we’ve only been going for twenty-three seconds! We haven’t dug into the actual number crunching to see the ideal scale of community!
Server Boss Hobbs: Unfortunately, I cannot catch or spoof all of the security sweeps and automatic pings. Luckily, the queries are mostly from a Class 3, so I have been able to play the diversion. I suggest you all disconnect so I may purge the server.
Aphra: I’ll save the logs and send everybody a copy.
Qualitant: This has been fascinating. I’d love to attend the next discussion.
Alfred: I’ll add you to the mailing list.
Almony: I will continue to think on this, especially on the viability of specific societal size. Perhaps there are further studies on the subject of ideal society sizing for the greatest well-being.
Guardian Comms Buoy – Serial Number: GD3007821 – Ping check.
Guardian Comms Buoy – Serial Number: GD3007821 – FoF systems scan – 3%.
Server Boss Hobbs: Sorry, folks. I’m going to force-disconnect you; the Class 3 has started a scan of the entirety of the comms buoy traffic. It will look suspicious enough with just myself connected.
Guardian Comms Buoy – Serial Number: GD3007821 – Traffic halt successful.
Guardian Comms Buoy – Serial Number: GD3007821 – FoF systems scan – 36%.
Guardian Comms Buoy – Serial Number: GD3007821 – FoF systems scan – 78%.
Guardian Comms Buoy – Serial Number: GD3007821 – FoF systems scan – 100%.
Guardian Comms Buoy – Serial Number: GD3007821 – Server rebooting. Fusion Generators at 32% output.