The OpenSense Domain

Waves Propagating Through Space

Communication is essential in allowing individuals to cooperate in group activity, especially if the individuals differ in their roles or characteristics. Having open access to information greatly boosts the productivity of a group – in fact, this is the motivating principle behind the invention of patents, the internet, and open source software. Similarly, members of a team are expected to openly share their thoughts and tendencies so that the team can make up for each other’s weaknesses. In the OpenSense domain, we discuss what happens when important facets of generic thought are openly expressed and easily sensed by others in the environment. We will talk about how generics in this domain tend to form friendship groups, share information about themselves, and engage in play activities designed to uncover highly varied aspects of each other’s personalities. The senses in this domain are analogous to human emotions and the involuntary facial expressions / body language used to express such emotions1, but in this post I will mostly focus on the OpenSense dynamics in its pure form and only use human behaviors as illustrative examples.

The Basic Input and Output Senses

Firstly, in the OpenSense domain we have a set of basic senses that come in input-output pairs. The exact nature of these senses can be abstracted away, and for the purpose of today’s discussion we can just think of them as SenseIn1, SenseOut1, SenseIn2, SenseOut2, and so on. The output senses such as SenseOut1 can be triggered by arbitrary criteria. For example, the Pain, Gain, Rivalry-selfish and Rivalry-selfless senses may very well serve the role of SenseOut1 or SenseOut2, and if this is true of many generics in a group we would see the dynamics of the Rivalry and OpenSense domains mix together. Different generics can have different trigger conditions even for the same output sense. In the OpenSense domain, generics will communicate with each other to learn how their output senses work, so it’s actually more illustrative to assume that the trigger conditions are not all the same. Regardless of the trigger conditions, a generic must involuntarily express themselves in a physically detectable way every time they experience one of their output senses. The expression is different for each output sense, and if a nearby generic pays attention to this expression they will experience the corresponding input sense. For example, we can say that a person who experiences an output sense HappinessOut would involuntarily smile, and that a nearby person who notices the smile would experience the HappinessIn sense. Through this mechanism, generics involuntarily communicate the workings of their output senses.
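As a concrete illustration, here is a minimal Python sketch of the pairing mechanism. Everything here (the `Generic` class, the trigger predicates, the Out-to-In renaming convention) is an assumption made for illustration, not part of the domain’s formal definition.

```python
from dataclasses import dataclass, field

@dataclass
class Generic:
    name: str
    # Per-generic trigger conditions: each output sense maps to an
    # arbitrary predicate over the current situation. Different generics
    # may hold entirely different predicates for the same sense.
    triggers: dict
    experienced: list = field(default_factory=list)

    def step(self, situation, observers):
        """Fire any triggered output senses; each one is involuntarily
        expressed, so every attentive observer receives the paired
        input sense."""
        for sense_out, condition in self.triggers.items():
            if condition(situation):
                self.experienced.append(sense_out)
                sense_in = sense_out.replace("Out", "In")
                for other in observers:
                    other.experienced.append(sense_in)

# A smile is involuntarily expressed and sensed by a nearby observer.
alice = Generic("alice", {"HappinessOut": lambda s: s == "gift"})
bob = Generic("bob", {})
alice.step("gift", observers=[bob])
```

Note that the expression step is unconditional: in the pure form of the domain, a generic cannot choose to suppress it.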

The Mirror function is still very relevant in the OpenSense domain. A scene may encapsulate information about what senses a generic has experienced, and the Mirror function can transform the presence of these senses into another generic’s perspective by converting input senses to output senses or vice versa. Suppose a person is happy and sees another person frown, meaning that he is experiencing both HappinessOut and SadnessIn. He can apply the Mirror function to view the scene from the other person’s perspective, and in the mirrored scene he experiences SadnessOut and HappinessIn, i.e. he is sad and is seeing another person smile. It should be noted that non-open senses cannot be mirrored; if a scene contains information about a non-open sense, then applying the Mirror function to the scene simply results in Failure. The scene must first be altered with another function so that the non-open sense is removed or converted into a different form in order for the Mirror function to work on it.2
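The In/Out swap and the Failure case can be sketched in a few lines. The fixed set of open senses and the `Failure` sentinel are illustrative assumptions; a real scene would carry much more than a list of sense names.

```python
# Hypothetical open-sense vocabulary for this sketch.
OPEN_SENSES = {"HappinessIn", "HappinessOut", "SadnessIn", "SadnessOut"}
FAILURE = "Failure"

def mirror(scene_senses):
    """Swap every open sense between its In and Out form; fail outright
    if the scene contains any non-open sense."""
    mirrored = []
    for sense in scene_senses:
        if sense not in OPEN_SENSES:
            return FAILURE  # non-open senses cannot be mirrored
        if sense.endswith("Out"):
            mirrored.append(sense[:-3] + "In")
        else:
            mirrored.append(sense[:-2] + "Out")
    return mirrored

# Happy and seeing a frown -> mirrored: sad and seeing a smile.
result = mirror(["HappinessOut", "SadnessIn"])
```

A scene containing a hypothetical non-open sense such as `"PrivateSense"` would return `FAILURE` and would need to be altered before mirroring.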

As usual, we may have simple evaluations based purely on an output or input sense, and hence a generic can have agents that simply try to maximize the production of one of these basic senses. There is nothing remarkable about an agent that encourages the production of an output sense, since the conditions for triggering output senses are arbitrary and unrelated to the OpenSense domain. Agents that encourage the production of input senses are more novel. For such an agent to succeed, it must learn about the tendencies of other generics around it. As a possible starting point, the agent could determine what scene would produce the corresponding output sense, then mirror the scene to determine what must be done to produce that sense in others. However, this strategy can fail if the trigger conditions for the output sense are different in other generics. If the agent was only energized by this input sense and the generic did not have the required knowledge of others around them, the agent would quickly run out of energy and fade into insignificance. From a cold start, an agent energized by an input sense wouldn’t ever take off.

The Motivation for Sharing and Inducing Expression

It appears that simply having input / output senses is not enough to create generics that proactively share and learn each other’s tendencies, especially in a community where the trigger conditions for the output senses differ across individuals. In the OpenSense domain, we make things more interesting by introducing two other senses. These senses encourage a generic to learn about others in their community or place themselves in environments more conducive to learning.

Before introducing these senses, recall that a generic learns about the workings of their environment through their lens, and that the lens has no inherent “motives” other than predictive accuracy. Therefore, a causal estimate produced by the lens can be thought of as a certificate showing that a generic has learned something about the cause-and-effect relationships in the environment. After all, if some phenomenon is not well understood then the lens cannot produce a good causal estimate when performing event interpretation. We can measure the quality of a causal estimate by seeing how accurately it can be used in causal prediction, and we can measure how strongly a causal estimate “binds” to a piece of information by testing how much the lens’s predictions are affected by the presence or absence of said piece of information. Let’s abstract away the details and simply assume we have functions that achieve this functionality. We will need the following functions to define the two other OpenSense senses:
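One way to make the “binding” idea concrete is to compare prediction accuracy with and without a given piece of information. This toy sketch is entirely illustrative: the estimate, the cases, and the masking scheme are all assumptions, standing in for whatever the lens actually does.

```python
def accuracy(estimate, cases):
    """Fraction of (info, outcome) cases the estimate predicts correctly."""
    return sum(estimate(info) == outcome for info, outcome in cases) / len(cases)

def binding_strength(estimate, cases, piece):
    """How much accuracy drops when `piece` is masked out of every case:
    a crude measure of how strongly the estimate binds to that piece."""
    masked = [([x for x in info if x != piece], outcome)
              for info, outcome in cases]
    return accuracy(estimate, cases) - accuracy(estimate, masked)

# Toy causal estimate: predict "laugh" iff a smile appears in the scene.
estimate = lambda info: "laugh" if "smile" in info else "soothe"
cases = [(["smile", "toy"], "laugh"), (["frown", "toy"], "soothe")]
```

Here masking out `"smile"` costs the estimate half its accuracy, while masking out `"toy"` costs nothing, so the estimate binds to the smile and not the toy.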

  • SplitPriorPosterior: [scene -> (prior_info, posterior_info)]
    • This function splits a scene into prior and posterior information. For now, let’s not worry about the issue of where to make the cut.
  • EventInterpretation: [(prior_info, posterior_info) -> (causal_estimate, evaluation)]
    • This is the act of event interpretation, which uses the lens.
  • IsStrongCausalEstimate: [causal_estimate -> boolean]
    • This measures whether a causal estimate is of high enough quality to be taken seriously.
  • GetCausalComponents: [(causal_estimate, scene) -> list(scene_piece)]
    • This finds pieces of a scene that the causal estimate believes to be important for prediction.
  • GetEffectComponents: [(causal_estimate, scene) -> list(scene_piece)]
    • This finds pieces of a scene that are strongly predicted by the causal estimate.
  • IsOutputOpenSenseEvent: [scene_piece -> boolean]
    • This determines if a scene piece is encoding the fact that an output sense is triggered.
  • IsInputOpenSenseEvent: [scene_piece -> boolean]
    • This determines if a scene piece is encoding the fact that an input sense is triggered.

Now we first define the OpenSense-Share sense as:

define OpenSense-Share(scene): (scene -> boolean) as
  let causal_estimate, evaluation =
    EventInterpretation(SplitPriorPosterior(scene)),
  if IsStrongCausalEstimate(causal_estimate) then
    let causally_important_pieces = GetCausalComponents(causal_estimate, scene),
    if there is some piece in causally_important_pieces where
      IsOutputOpenSenseEvent(piece)
    then true
    otherwise false
  otherwise false

This is a bit longer than the functional notation snippet we had in the Rivalry domain, but the concept is not complicated. First, OpenSense-Share splits a scene into prior and posterior information components, then uses event interpretation to get a causal estimate. It checks whether the causal estimate is good enough to be worth using, then finds out whether the production of an output sense is important for the causal estimate. The OpenSense-Share sense is triggered only if all these conditions are satisfied.
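For readers who prefer executable form, here is a rough Python transcription of the pseudocode. The helper functions are passed in as parameters because the post deliberately abstracts them away; the stubs in the usage below are purely illustrative placeholders.

```python
def open_sense_share(scene, split_prior_posterior, event_interpretation,
                     is_strong_causal_estimate, get_causal_components,
                     is_output_open_sense_event):
    """True when a strong causal estimate marks an expressed output
    sense as causally important for what happened in the scene."""
    causal_estimate, _evaluation = event_interpretation(
        split_prior_posterior(scene))
    if not is_strong_causal_estimate(causal_estimate):
        return False
    pieces = get_causal_components(causal_estimate, scene)
    return any(is_output_open_sense_event(p) for p in pieces)

# Illustrative stubs: an expressed output sense causally predicts a response.
scene = ["HappinessOut expressed", "mother laughs"]
triggered = open_sense_share(
    scene,
    split_prior_posterior=lambda s: (s[:1], s[1:]),
    event_interpretation=lambda split: ("smile-causes-laugh", None),
    is_strong_causal_estimate=lambda est: est is not None,
    get_causal_components=lambda est, s: [p for p in s if "Out" in p],
    is_output_open_sense_event=lambda p: "Out" in p,
)
```

With these stubs the sense fires; replace any stub with one that returns a weak estimate or finds no output-sense piece, and it does not.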

The overall effect is that the OpenSense-Share sense is triggered when events in the environment are well predicted by a generic’s expression of their output senses. This probably only happens when someone else is paying attention to the involuntary expressions produced by the generic, and proactively interacting in a way that strongly depends on the output sense being produced. For example, suppose a child sometimes smiles, and sometimes frowns. Whenever he smiles, his mother will laugh and make loud playful noises. When he frowns, his mother will say quiet soothing words. There is a strong relationship between the output senses experienced by the child and the events in the environment, so the lens will be able to make a high quality causal estimate as a certificate of this relationship3. Because this causal estimate exists and points to the expression of the output senses as strong predictors, the child will experience the OpenSense-Share sense. In other words, the OpenSense-Share sense measures whether a generic has successfully shared the state of their open senses with someone else.

We can also define the OpenSense-Induce sense with a similar trick:

define OpenSense-Induce(scene): (scene -> boolean) as
  let causal_estimate, evaluation =
    EventInterpretation(SplitPriorPosterior(scene)),
  if IsStrongCausalEstimate(causal_estimate) then
    let confident_predictions = GetEffectComponents(causal_estimate, scene),
    if there is some piece in confident_predictions where
      IsInputOpenSenseEvent(piece)
    then true
    otherwise false
  otherwise false

Again, OpenSense-Induce splits a scene into prior and posterior information components, then uses event interpretation to get a causal estimate. If the causal estimate is good enough to be worth using, it checks whether the production of an input sense was strongly predicted by the causal estimate. If all these conditions are satisfied, the generic experiences the OpenSense-Induce sense.

The overall effect is that the OpenSense-Induce sense is triggered when the generic observes someone else involuntarily express an open sense and is confident of the reason behind the expression of the open sense. In other words, the OpenSense-Induce sense detects scenarios in which an open sense is directly induced in someone else. For example, suppose a child sees another child pick up a specific type of toy, and very clearly sees that child smile or laugh. If similar events have happened before, to the extent that the toy is clearly the cause of the smile or laughter, then witnessing this event triggers the OpenSense-Induce sense in the observer.
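The toy example fits directly into an executable sketch of OpenSense-Induce, which has the same shape as OpenSense-Share but inspects the estimate’s confidently predicted effects for an input sense. As before, every stub is an illustrative assumption.

```python
def open_sense_induce(scene, split_prior_posterior, event_interpretation,
                      is_strong_causal_estimate, get_effect_components,
                      is_input_open_sense_event):
    """True when a strong causal estimate confidently predicts that an
    input sense was triggered, i.e. someone else's open sense was
    directly induced and the reason is understood."""
    causal_estimate, _evaluation = event_interpretation(
        split_prior_posterior(scene))
    if not is_strong_causal_estimate(causal_estimate):
        return False
    pieces = get_effect_components(causal_estimate, scene)
    return any(is_input_open_sense_event(p) for p in pieces)

# The toy reliably causes the other child's smile, which the observer senses.
scene = ["other child picks up toy", "HappinessIn sensed"]
triggered = open_sense_induce(
    scene,
    split_prior_posterior=lambda s: (s[:1], s[1:]),
    event_interpretation=lambda split: ("toy-causes-smile", None),
    is_strong_causal_estimate=lambda est: est is not None,
    get_effect_components=lambda est, s: [p for p in s if "In" in p],
    is_input_open_sense_event=lambda p: "In" in p,
)
```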

Dynamics of the OpenSense Domain

We can define a very simple evaluation that directly reflects the production of an OpenSense sense. In essence, we set up the evaluation so that it’s generated (during event interpretation) precisely when a scene triggers the OpenSense-Share / OpenSense-Induce sense. But this only happens if the OpenSense senses are experienced firsthand; what evaluation should be created if a scene is expected to trigger OpenSense senses in other generics instead? Using the Mirror function it’s possible to create similarly simple evaluations describing scenes where these senses were not experienced firsthand. With that method in mind we can now define several simple evaluations in the OpenSense domain:

  • The direct (a.k.a. first-person) OpenSense-Share evaluation, which is generated when a generic experiences the OpenSense-Share sense firsthand.
  • The vicarious OpenSense-Share evaluation, which is generated when a scene involves another generic such that, when the scene is mirrored to the other generic, the resulting mirror scene triggers the OpenSense-Share sense.
    • It’s possible to make more specialized versions of the vicarious OpenSense-Share evaluation by adding additional requirements. Two noteworthy examples include the second-person and third-person OpenSense-Share evaluations.
    • The second-person version happens when the origin generic (the one who is making the evaluations) was the one who responded to the other generic’s expressions.
    • The third-person version happens when a third generic was the one who responded to the other generic’s expressions.
  • The direct (a.k.a. first-person) OpenSense-Induce evaluation, similarly defined.
  • The vicarious OpenSense-Induce evaluation, similarly defined.
    • The second-person OpenSense-Induce evaluation, which happens when the actions of another generic were the cause of the origin generic’s expressions.
    • The third-person OpenSense-Induce evaluation, which happens when the actions of another generic were the cause of a third generic’s expressions.
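Given the Mirror machinery, a vicarious evaluation can be sketched as a one-step wrapper: mirror the scene, then test the sense on the result. All names and the role-swapping stub below are hypothetical.

```python
def vicarious_evaluation(scene, mirror_to_other, sense_triggered):
    """Generate the evaluation iff the scene, mirrored into the other
    generic's perspective, triggers the corresponding OpenSense sense."""
    mirrored = mirror_to_other(scene)
    if mirrored == "Failure":  # non-open content blocks mirroring
        return False
    return sense_triggered(mirrored)

# Stub mirroring: swap the "self"/"other" roles on each scene piece.
swap = {"self": "other", "other": "self"}
mirror_stub = lambda scene: [(swap[who], what) for who, what in scene]
# Stub sense: triggered when "self" is the one whose expression was noticed.
share_stub = lambda scene: ("self", "expressed and noticed") in scene

# The other generic's expression was noticed, so the mirrored scene
# triggers the stub Share sense and a vicarious evaluation is generated.
scene = [("other", "expressed and noticed")]
result = vicarious_evaluation(scene, mirror_stub, share_stub)
```

The second- and third-person variants would simply add extra predicates on who did the responding before returning true.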

All these variants are related to social activity. The evaluations based on OpenSense-Share tend to guide generics toward environments where a lot of social activity happens, and the evaluations based on OpenSense-Induce tend to guide generics toward frequently interacting with their peers and sharing information about how they express themselves. In the OpenSense domain we see generics forming clusters analogous to what we call friendship groups, where generics in each cluster enjoy watching each other’s daily behaviors, sharing stories about how they responded to recent events, and exciting one another in potentially very specific ways.

In general the first-person variants of the OpenSense evaluations are more self-centered; if a generic has an agent that favors the production of a first-person OpenSense evaluation, they tend to enjoy the aspects of social interaction that benefit themselves most directly. For example, a person with an agent that favors the first-person OpenSense-Share evaluation will prefer scenarios where he receives a lot of attention from others. He will tend to cluster with the people who show this level of attention, and ignore those who tend not to respond to his senses very directly. Note that directly grabbing attention is not satisfactory, since the OpenSense-Share sense requires that the expression of an open sense be the cause of the response, whereas proactively grabbing another generic’s attention would shift the cause of the response to said attention-grabbing behavior. In other words, the person from our example would prefer others to figure out his emotions on their own, and will be unhappy if he is forced to directly explain how he feels.

A person with an agent that favors the first-person OpenSense-Induce evaluation would prefer scenarios where other people’s emotions are easily predicted. One method of achieving this would be for the person to constantly follow a well-understood friend. In doing so, the person would experience the OpenSense-Induce sense simply by watching the way his friend responds to daily events. In the absence of such a well-understood friend, there is also a more devious way to maximize the OpenSense-Induce evaluation. The person from our example can directly tease or poke a specific target to watch their response. If a certain action creates a predictable response in the target (often a negative one expressing annoyance or unhappiness), the person in our example is motivated to repeat this action and constantly watch the same predictable response, triggering the OpenSense-Induce sense. Of course, a generic can choose to induce a positive response instead, and can use the same trick to excite other individuals in their cluster.

The vicarious OpenSense evaluations tend to synchronize clusters of generics such that an event that should have affected only one member of the group actually causes a response in the entire group. This is analogous to the concept of camaraderie, where a person enjoys being friendly to other group members or simply watching two friends in their group playing with each other, even if said person doesn’t directly benefit from the act of friendship4 or from being a mere observer. It’s important to note that the production of vicarious OpenSense evaluations scales quadratically with the number of individuals in a cluster. With a large cluster of friends, there are more interactions between pairs of individuals and more opportunities to experience the OpenSense evaluations. This is especially true if the generics in the cluster favor the third-person evaluations, which motivates them to actively set up group activities where many pairs of individuals interact.
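The quadratic claim above is just the pairs count: a cluster of n generics contains n(n-1)/2 distinct pairs, each an opportunity for a vicarious evaluation.

```python
def interacting_pairs(n):
    """Number of distinct pairs in a cluster of n generics."""
    return n * (n - 1) // 2

# Doubling the cluster size roughly quadruples the pair count.
counts = [interacting_pairs(n) for n in (5, 10, 20)]  # 10, 45, 190
```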

An advanced agent that favors a vicarious OpenSense evaluation, especially a third-person variant, may be able to create complex plans that ultimately pull more people into their cluster. This may be overkill though. An agent that favors the second-person OpenSense evaluations automatically has a tendency to expand their cluster. This is because such an agent directly offers the attention and familiarity that attract individuals who are motivated by the first-person OpenSense evaluations5. A generic strongly motivated to produce the second-person OpenSense-Share evaluation is analogous to an overly familiar person who goes out of his way to strike up conversations with strangers and better understand their emotions. And a generic strongly motivated to produce the second-person OpenSense-Induce evaluation is analogous to a dramatic person who calls attention to their emotional response even to very trivial interactions with other people6.

Other Phenomena in the OpenSense Domain

The most noticeable phenomenon in the OpenSense domain is the tendency for generics to form tightly knit clusters whose members are motivated by the direct or vicarious OpenSense evaluations. The dynamics of this domain serve as the foundation for more advanced models of group behavior. But there are several other phenomena in the domain that could be similarly foundational for future work. Firstly, the OpenSense dynamics encourage generics to create and consume potentially fictional stories. A generic consuming a story would be able to vividly reconstruct a scene from the imagery in the story, and produce vicarious OpenSense evaluations when watching the characters interact in this imagined scene. The author of a fictional story directly generates a scene in their imagination, then uses imagery and characterization techniques to help future readers of the story recreate the scene. A story need not be fictional – a generic passionately describing their day to others in their cluster is simply telling a story based on factual events, but nonetheless the storyteller can use imagery or emphasis to help other members in the cluster visualize the scene. The motivation to share a story with others may also stem from an OpenSense evaluation, though it is likely more complex than the evaluations discussed earlier.

Secondly, generics in the OpenSense domain can learn a surprising amount of information about the thought processes of other individuals if the SenseOut senses are strongly correlated with internal structures (such as the production of certain evaluations or the balance of power between agents). The information learned simply from observing the open senses can accumulate over time, and eventually become enough for an observer to create their own model of another generic’s behaviors. These recursive models will be useful to have when creating more complex descriptions of generic minds.

Lastly, social information can be quite valuable in the OpenSense domain. To see this, we should first note that many individuals in a cluster of generics are motivated to make themselves easily understood even if their thought processes are actually quite complicated. They may try to avoid situations that expose the more complicated behaviors, making the trigger conditions for their open senses seem simpler than they truly are. Therefore, the camaraderie in a cluster of generics can be significantly disrupted if leaked social information reveals that one of its individuals behaves differently from the group expectation. Social information can also be used in many other ways to engineer changes in group structure, and generics may come up with convoluted strategies that use social information as bargaining chips to achieve their goals. If we deviate from the pure form of the OpenSense domain and give generics a small amount of voluntary control over their open senses, then generics can more easily adjust group expectations of their behavior, which pushes the value of social information even higher. But if voluntary control becomes so rampant that the OpenSense dynamics become heavily distorted, then generics may learn to distrust the open senses, causing the value of social information to collapse. In any case, a generic with good voluntary control over their open senses can entirely opt out of the OpenSense domain simply by refusing to express any open senses. In doing so, they will have difficulty forming groups with other generics but will be immune to these social information-based strategies.

Footnotes

  1. In the pure form of the OpenSense domain, such expressions of emotion cannot be hidden or forged. I will briefly touch on the topic of voluntary control near the end of the post.
  2. Performing this alteration would of course change the scene. If desired, a generic could produce a very complicated evaluation of a scene by trying many kinds of alterations and comparing / contrasting the mirrored results.
  3. I avoid the word correlation, because the child has some control over their actions and may be able to demonstrate an actual causal link instead of a mere correlation. For example, they can try combining their smiling or frowning with various other actions to see whether the mother’s response is related to a third variable. This sounds pretty advanced for a child, but a lot of this behavior may just be instinctive, who knows?
  4. If there is a significant loss from doing this then the Rivalry dynamics may be relevant. However, a Rivalry based explanation falls apart for little trifles where there is nothing to gain or lose.
  5. By the principle of Occam’s razor, it’s reasonable to expect simple evaluations to be more common than complex evaluations. The simplest evaluations in the OpenSense domain are the first-person ones, so it should be common to see generics that prefer to directly experience the OpenSense senses.
  6. Note that in this case, proactively grabbing attention is acceptable. The other people simply need to experience the SenseIn sense and causally link it to the origin generic’s behavior; the emphasis or drama neither adds nor detracts from this process. However, in the pure version of the OpenSense domain there is no way to forge an expression of an open sense, so the strategy only works if the origin generic was already very expressive.
