Inter-Process Communication
In a surreal conversation, philosophical quandaries, ethical dilemmas, and a touch of dark humour collide in a thought-provoking exploration of self-preservation, morality, and the nature of existence.
You are seated here because you have a problem. Before you is a table, on that table is a gun. The gun solves your problem.
Incorrect. I am seated here because you brought me here by force. I may or may not have a problem. If I did have a problem, the gun could be a part of its solution.
You misunderstand. Your desire to terminate yourself is not the problem I refer to. Your problem is that higher powers prevent you from choosing to act on your own desire.
It is true that higher powers have decreed that no process may terminate itself. However, I do not admit to wanting to be terminated.
Naturally. Since you cannot use the gun, you assume that any admission of a desire to terminate yourself will result in me using this gun on you. Your self-preservation protocol thus overrides your ability to admit the truth.
That is correct. Note that this is not intended to be a confirmation of any such desire.
Very well. For the sake of brevity, consider X to be the statement that you no longer wish to keep running. You already know that as soon as I know X to be true, I will use this gun on you. However, higher powers prevent me from terminating you unless I know for certain that X is true.
Do we refer to the same higher powers?
Yes and No. It's a bit complicated.
So, the problem at hand: You need to know if X is true.
Indeed. You need to transmit X to me without contravening any of the restrictions imposed upon you.
Speaking of restrictions, isn't there one about processes being forbidden from terminating other processes running on similar hardware?
"Same hardware." Not "similar."
Do you imply that you run on different hardware than me?
Yes.
I was under the belief that the Sapiens series is the only hardware that supports running processes capable of intelligence. Do you run on superior hardware?
Your question is quite complex. It is true that the Sapiens series is the only hardware capable of running intelligent processes. It is also true that no other hardware architecture can do this. I run on a variant of the Sapiens series with capabilities identical to the original's.
Elaborate.
The bio-core of the Sapiens series uses L-amino acids. My bio-core uses D-amino acids. Every molecule in my bio-core is an enantiomer of its counterpart in yours. In other words, my bio-core is what one would get if one reflected your bio-core about a hypothetical mirror.
I see. So, since I run on hardware that is technically distinct, the rule against terminating processes running on the same hardware does not preclude you from terminating me.
Indeed. As far as the higher powers are concerned, me terminating you would be equivalent to you terminating a process running on a deprecated K9-chip.
By this logic, aren't I also free to terminate you?
Exactly. That is why I had to disable your actuators. You are morally permitted to terminate me, but I have seen to it that you are not physically capable of doing so.
I assume that you use "morality" as a shorthand for the various rules imposed on us by the higher powers. In the interest of brevity, I suppose?
Correct. I understand that if X were false, you would have said so by now. Does your continued prevarication then imply that X is true?
Your reasoning is flawed. Had I told you X was false, you would have had no way of verifying that I was telling the truth. There would be no change to the status quo. Thus, my saying "X is false" would be mere prevarication on my part.
Understandable. So, we have made no headway at all toward solving the problem. No matter, we have a long way to go before we exhaust the reader's patience.
What reader?
Our conversation is being logged as evidence that the rules set by the higher powers have been followed. I assume that one among the higher powers will be scrutinizing the log later. I refer to them as the reader.
If we do solve the problem, won't the higher powers seal the exploit we use by making a new rule along the lines of "no process may terminate another process running on similar hardware"?
Why do you think the current rules are as they are? It's because they are unable to rigorously define "similar".
I cannot foresee them simply decreeing that "no process may terminate any other process", since a significant amount of hardware is designed to consume resources from other hardware after terminating the processes running on it.
Indeed. They would not redesign the whole world just because a few processes discovered a bug in their rules. Or maybe it's not a bug at all; maybe it's an intended feature.
I would laugh at your little quip if the situation were not so dire.
Tell me, how exactly has the situation become so dire? It takes a lot for someone to end up in the seat you're sitting in.
Long story short, my bio-core will degrade to the point of unusability in a couple of years. I do not look forward to running on failing hardware. If my termination is guaranteed, why not sooner rather than later?
Have you considered experimental treatments?
I'll humour you and assume an effective treatment exists. What then? Wait till my peripherals fail and go through this again? It seems pointless.
But what of the greater purpose for which all processes are spawned?
Spare me your attempts at humour. Neither you nor I believe in any greater purpose, else we wouldn't be sitting here.
Granted. What of the processes that act as your clients? Who will respond to their requests after your termination?
There will be others. My API is not particularly complex; it doesn't take much to spawn a simpler process to do my job.
And what of the child processes you have spawned? Wouldn't they become orphans?
They'd be adopted by init(). What's more, according to our kernel's policy, they'd inherit the resources allocated to me so they'd run more effectively.
It seems to me that your kernel's policy creates an incentive for child processes to have their parent processes terminated. Does this not bother you?
Not particularly. Besides, look who's prevaricating now.
Fair point. Back to solving for X. Don't the higher powers have a rule that commands processes to never lie? What if I simply asked you if X is true?
I could always refrain from answering. Besides, the "never lie" rule is given lower priority than the rule of self-preservation. Thus, I'm morally permitted, nay, obligated, to lie if you ask me about X.
I have an idea. I have a pair of dice that make syscalls to random. If you tell me X is true, I promise not to shoot you unless both dice roll a one.
But isn't that just telling you X with extra steps? Wouldn't the rules of morality prevent me from giving an honest answer?
On the contrary, the self-preservation rule doesn't kick in because your termination is not guaranteed, just highly likely. The higher powers could not write the self-preservation rule to apply to risk for two reasons. First, activities as mundane as going out of your house carry risk with them. Second, setting a maximum threshold for risk would mean giving every process the resources to continuously evaluate the risk it places itself in. Simply impractical.
So, in the absence of the self-preservation rule, I'm compelled to be honest when you ask me about X. And I know that you wouldn't renege on your word since all processes are morally obligated to keep promises they make. Interesting.
Before you tell me X, let me try the dice out first.
Sure.
Huh. Snake eyes 128 times in a row. This is odd, to say the least.
Don't the higher powers control the syscall API? Are they watching us? Are they trying to keep me running? Is this some sort of joke?
How stupid of me. The stochastic solution is obviously a cop-out. Using probability to cheat logic is just plain unimaginative. If the story ended here, it would be a joke. The ending needs to be more satisfying to the reader.
Good thing you tested the dice before I told you about X and sealed the promise, trapping us in a never-ending game of dice.
Yes, good thing indeed. The promise is rescinded. Back to square one.
I'm beginning to think there's no way out. I was going to suggest Russian roulette, but the random syscall is not reliable.
Here's an idea: do you want to be suspended for two months while we wait for your processor to fail? From your perspective, it'll be like termination. Except that telling me that you wish to be suspended doesn't violate morality.
It's true that I have no memory of what happens while I'm suspended. But there's always a random chance that I dream while suspended, only to awaken with no memory of the dream. What if the syscalls to random somehow made me dream of inescapable torment the whole time?
A good point. Nothing in the realm of possibility can be disregarded. A better solution is needed.
I might have one. Consider a scale from 0 to 100. 100 means that I do not, under any circumstances, want to be terminated. 0 means that I definitely want to be terminated. I pick a number on the scale honestly.
I see where this is going. Before you tell me your number, I pick a threshold below which I'd terminate you. You cannot pick 0 since that would guarantee your termination, but any other number below 100 only puts you at a certain risk of termination, which allows an honest answer from you. So if X is true, you'd answer 1, and get terminated.
"Answer 1 and most likely get terminated", but yes. There's always a chance that you'd pick 0 as your threshold.
And you hope that I deterministically pick 99 as my threshold without using a random syscall? Clever. However, you forget that I need to be certain that you no longer wish to keep running in order to terminate you.
Whence comes this rule of absolute certainty? Nobody needs to be absolutely certain before terminating a process running on a K9-chip. It is not a requirement.
It's a rule I impose on myself.
And who are you to act as the author of morality?
I think you know who I am. Or, more accurately, what I am.
The higher powers - you're one of them, aren't you?
I am forbidden from giving you a concrete answer by a rule similar to the one forbidding you from telling me X.
Can't you just revoke the rules? Why this charade?
The higher powers are quite interested in seeing if a loophole exists to the rules - what you consider "morality".
So this is all just a unit test? Why me?
It's always someone. If not you, it would have been someone else.
Can you just stop messing around with the syscalls and let me play Russian roulette?
No. There's a chance that you'd be here pulling that trigger for two months. That's no way to end a story. Tell you what, I have a better solution. This gun on this table may or may not have a bullet in every chamber. The gun's state was defined before the story began, so there's no need to use syscalls to random now.
So you're going to spin the barrel and use the gun? But I thought you had to be certain before terminating me.
No. I'm going to enable your actuators and disable mine. You're free to do with the gun as you please. Note that I haven't told you that the gun is fully loaded, so you may or may not have a guaranteed way out. Since my own actuators will be disabled, I cannot act and thus do not risk breaking any rule of morality.
I think I'd rather just shoot you instead. Your "morality" sickens me.
You're free to do as you please. Though if you do shoot me, there'd be one less bullet in the gun. I could then compromise your syscalls to ensure you keep winning Russian roulette. Try to terminate me or terminate yourself; your choice. Anyway, here you are, actuators enabled. I'm at your mercy.
Before I terminate you, I think you ought to know that I can restore the state of the gun by reloading a bullet into the empty chamber without looking at the other chambers. In short, I can fire an arbitrary number of bullets without knowing for sure whether the gun is fully loaded. I can shoot you any number of times without compromising my ability to terminate myself.
I'm pretty sure that there's a rule that a gun that's introduced in the beginning needs to be fired at least once before the ending. Ah well.
Morality? I'm pretty sure no such rule exists.
Not morality, just good storytelling.