Devise an ecosystem in which the rules that govern trust emerge from the wisdom of crowds.

This is a multi-year issue to evolve laws of trust, prove their merit, and calibrate psychologically grounded models of trust. We assume the fundamental laws of trust can be expressed as executable code combined with data about the observations, actions, and beliefs of others. The central element is the trust function. This is a speculative issue; REQUIREMENT: the easier goal of end-to-end reinforcement learning and self-replicating agents (#3752) must be achieved first.
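To make the idea of "executable code plus data" concrete, a minimal sketch of what a trust function could look like is given below. The names (`Observation`, `Belief`, `TrustFunction`, `tit_for_tat_trust`) and the specific scoring rule are illustrative assumptions, not part of this proposal.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical record types; the real ecosystem would persist these in
# mutable storage (e.g. a trustchain-like, tamper-proof database).
@dataclass
class Observation:
    subject: str      # peer being observed
    action: str       # e.g. "seeded", "defected"
    reported_by: str  # who gossiped this observation

@dataclass
class Belief:
    subject: str
    trust_estimate: float  # prior belief about the peer, in [0, 1]

# A trust function is executable code: it maps observations, actions,
# and beliefs about others to a trust score for a given peer.
TrustFunction = Callable[[str, List[Observation], List[Belief]], float]

def tit_for_tat_trust(peer: str,
                      observations: List[Observation],
                      beliefs: List[Belief]) -> float:
    """Toy trust function: trust a peer in proportion to cooperative acts."""
    relevant = [o for o in observations if o.subject == peer]
    if not relevant:
        # Fall back on prior beliefs when nothing has been observed yet.
        priors = [b.trust_estimate for b in beliefs if b.subject == peer]
        return sum(priors) / len(priors) if priors else 0.5
    cooperative = sum(1 for o in relevant if o.action == "seeded")
    return cooperative / len(relevant)
```

Any such function is itself data that peers can discover, mutate, and gossip, as described in the list below.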
- Everything is expressed as a parameter and is suitable for mutation: all information storage (enabling indirect reciprocity), database technology (e.g. trustchain tamper-proofness), and strategies (tit-for-tat, win-stay-lose-shift, etc.).
- Everybody can discover, select, mutate, and gossip any trust function.
- All users obtain a certain satisfaction or utility within our ecosystem (simplistic metric: days of usage).
- All users gossip about the trust function they use and the utility they obtained.
- The trustworthiness of gossip is itself calculated, creating a circular dependency and an evolutionary process.
- Periodically, each user evaluates the trust function they use against their perceived utility.
- Craft a positive reinforcement feedback loop which selects a semi-random new trust function with a bias towards superior utility (see the sketch after this list).
- Attract scientists by making all data easily available for machine learning research.
- Stimulate the emergent effect of scientists publishing on trust function improvements and their experimental validation (e.g. trust engineering).
- The ecosystem drifts towards an understanding of the fundamentals of trust.
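The selection step of the feedback loop could look roughly like the following. This is a minimal sketch under assumed names (`GossipRecord`, `select_next_trust_function`) and an assumed softmax-style bias; it also glosses over how the trustworthiness of gossip senders would actually be derived.

```python
import math
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class GossipRecord:
    """One piece of gossip: a peer claims a trust function gave them some utility."""
    sender: str
    trust_function_id: str
    reported_utility: float  # e.g. days of usage

def select_next_trust_function(gossip: List[GossipRecord],
                               sender_trust: Dict[str, float],
                               temperature: float = 1.0) -> str:
    """Semi-random selection biased towards functions with superior reported utility.

    Gossip from more trusted senders counts for more, which closes the circular
    dependency between trust and the evolution of trust functions.
    """
    weighted_utility: Dict[str, float] = {}
    weight_sum: Dict[str, float] = {}
    for record in gossip:
        w = sender_trust.get(record.sender, 0.5)  # default: neutral trust
        weighted_utility[record.trust_function_id] = (
            weighted_utility.get(record.trust_function_id, 0.0)
            + w * record.reported_utility)
        weight_sum[record.trust_function_id] = (
            weight_sum.get(record.trust_function_id, 0.0) + w)

    candidates = list(weighted_utility)
    # Softmax over trust-weighted mean utilities: better-performing functions
    # are adopted more often, but exploration of others is never ruled out.
    means = [weighted_utility[c] / max(weight_sum[c], 1e-9) for c in candidates]
    exps = [math.exp(m / temperature) for m in means]
    total = sum(exps)
    return random.choices(candidates, weights=[e / total for e in exps], k=1)[0]
```

Raising the temperature keeps selection closer to uniform (more exploration); lowering it makes adoption of the best-reported function nearly deterministic.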