Binance Square
Kenia Bobino
Exploring the vision behind @Mira_network, and I'm impressed by how $MIRA is positioning itself at the intersection of scalable infrastructure and real Web3 utility. The focus on sustainable growth, community-driven governance, and long-term ecosystem value makes #MIRA a project worth watching closely. Excited to see how $MIRA evolves in the next phase!

The Honesty Layer: My Conversation About the Project Trying to Make AI Tell the Truth

I was talking to a lawyer friend last week (let's call him Mike) and he told me a story that's been stuck in my head ever since.
He was preparing a brief, the kind of tedious document review that makes young associates question their life choices. On a whim, he asked an AI to help find precedents related to his case. The AI came back with five perfect citations. Cases with names, dates, court dockets, even summaries of the rulings. It looked like it had saved him hours of work.
Only one problem. Three of those cases didn't exist.
The AI had just made them up. Not maliciously. Not intentionally. It had just generated what sounded right based on patterns in its training data. Mike only caught it because he had a feeling and checked the sources himself. If he'd been in a hurry, and lawyers are always in a hurry, he might have filed a brief citing fake cases. That's the kind of thing that gets you sanctioned. Or fired.
"The thing that bothers me," Mike said, "is that it was so confident. It didn't say 'maybe these exist.' It presented them like facts. And they were just nothing."
This is the problem that's been keeping me up at night lately. Not just with legal research, but with everything. We're handing more and more of our thinking to systems that don't actually know anything. They're really good at sounding like they know things. But knowing? That's different.
What I Learned From a Guy in Lisbon
I tracked down one of the people working on this problem (let's call him Alex, though that's not his real name) and we ended up talking for three hours about why AI lies and what we can do about it.
Alex told me a story about a bar in Lisbon during some blockchain conference. He was arguing with a friend about whether AI would ever be trustworthy enough to handle real money. His friend thought the models would keep getting better until they stopped making things up. Alex thought the problem was deeper.
"I lost that argument," Alex said. "But I was also right."
His point was that large language models aren't designed to be truthful. They're designed to be plausible. They're pattern-matching engines that have read basically the entire internet and learned what words tend to follow other words. When you ask them a question, they're not checking a database of facts. They're generating the most statistically likely sequence of tokens.
This is why they hallucinate. It's not a bug you can fix with more training data. It's a feature of how they work.
"But here's the thing," Alex told me. "We don't need to fix the models. We just need to catch them when they're wrong."
That's the insight that became the project he's working on now. Not a better AI. A layer on top of AI that checks its work.
How Do You Actually Check If an AI Is Lying?
I asked Alex the obvious question: how do you check something when the thing doing the checking might also lie?
His answer was surprisingly simple. You don't trust one checker. You trust a crowd.
Here's how it works in practice, as he explained it.
You ask an AI something. Maybe it's "What's the best time to plant tomatoes in zone 7?" The AI gives you an answer about frost dates and soil temperatures. That answer gets broken down into tiny pieces: individual claims that can be checked separately.
"Tomatoes shouldn't be planted until after the last frost." That's one claim.
"The average last frost date in zone 7 is mid-April." That's another.
"Soil temperature should be at least 60 degrees." That's a third.
Each of these tiny claims gets sent to a bunch of different verifiers. But here's the twist: the verifiers aren't humans. They're other AI models. Different ones. Some are small and specialized. Some are big generalists. Some are fine-tuned on gardening data. Some aren't.
They all look at the same claim independently and vote on whether it's true.
If most of them agree, the claim passes. If they disagree, it gets flagged. If a verifier is consistently wrong (voting yes on false claims or no on true ones), it gets penalized. If it's consistently right, it gets rewarded.
The system doesn't care what the original AI said. It cares what the crowd of other AIs thinks.
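The voting scheme Alex describes can be sketched in a few lines. To be clear, this is my own illustrative sketch, not the project's actual protocol: the two-thirds threshold and the toy verifier "models" are assumptions I made up for the example.

```python
from collections import Counter

def verify_claim(claim: str, verifiers: list, threshold: float = 2/3) -> str:
    """Send one atomic claim to independent verifiers and tally their votes.

    Each verifier is any callable that returns True (claim holds),
    False (claim fails), or None (abstains when uncertain).
    """
    votes = [v(claim) for v in verifiers]
    tally = Counter(v for v in votes if v is not None)
    total = tally[True] + tally[False]
    if total == 0:
        return "flagged"  # nobody could judge it
    if tally[True] / total >= threshold:
        return "verified"
    if tally[False] / total >= threshold:
        return "rejected"
    return "flagged"  # verifiers disagree: escalate for review

# Three toy "models" standing in for real verifiers.
verifiers = [
    lambda c: "after the last frost" in c,  # narrow pattern matcher
    lambda c: True,                         # credulous generalist
    lambda c: None,                         # abstains when unsure
]
print(verify_claim("Tomatoes shouldn't be planted until after the last frost",
                   verifiers))  # → verified
```

The point of the structure is that no single verifier's answer matters; only the weight of independent votes does.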
"It's like having a panel of experts review your work," Alex said. "Except the experts are machines, and they're all slightly different, and they have financial incentives to be honest."
The Part About Money That Actually Matters
This is where my eyes usually glaze over, because crypto people love talking about tokenomics and incentives in ways that make my brain hurt. But Alex explained it in a way that actually made sense.
"The verifiers have to put up money to participate," he said. "Real money. Tokens that have value. If they vote wrong, saying a false claim is true or a true claim is false, they lose some of that money."
This changes everything. Suddenly, the people running these verifiers have skin in the game. They're going to pick the best AI models they can find. They're going to double-check things when they're uncertain. They're going to be careful, because being wrong costs them.
The opposite is also true. Verifiers who are consistently right earn rewards. They get paid for being accurate.
"So over time," Alex said, "the network naturally ends up with the verifiers that are best at telling truth from falsehood. The bad ones lose their money and drop out. The good ones make money and stick around."
It's not about building a perfect truth-detecting machine. It's about creating a market where honesty is more profitable than dishonesty.
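The stake-and-slash dynamic Alex describes can be modeled in miniature. Everything below is invented for illustration: the stake sizes, the 10 percent slash, and the flat reward are not the network's real parameters.

```python
def settle_round(stakes: dict, votes: dict, truth: bool,
                 slash_rate: float = 0.10, reward: float = 5.0) -> dict:
    """Apply one round of rewards and penalties to verifier stakes.

    Verifiers who voted with the ground truth earn a fixed reward;
    those who voted against it lose slash_rate of their stake.
    """
    updated = dict(stakes)
    for name, vote in votes.items():
        if vote == truth:
            updated[name] += reward
        else:
            updated[name] -= slash_rate * updated[name]
    return updated

stakes = {"honest": 100.0, "sloppy": 100.0}
# Over repeated rounds, the accurate verifier compounds rewards
# while the inaccurate one bleeds stake until it's priced out.
for _ in range(10):
    stakes = settle_round(stakes, {"honest": True, "sloppy": False}, truth=True)
print(stakes)  # honest grows to 150.0; sloppy shrinks to roughly 35
```

The selection effect Alex mentions falls out of the arithmetic: nobody bans the sloppy verifier, it simply can't afford to keep playing.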
The Privacy Thing I Actually Care About
I asked Alex about privacy, because this is the part that usually kills these kinds of systems for me. If you're checking something sensitive (medical information, financial data, personal stuff), you don't want it broadcast to a thousand random verifiers.
He smiled. "Yeah, we thought about that."
The system fragments everything. Each verifier sees only a tiny piece of the puzzle. One verifier might see "tomatoes shouldn't be planted until after the last frost" without any context about who asked or why. Another verifier sees "average last frost date in zone 7 is mid-April." They can't piece together the full picture because they don't have all the pieces.
The verification happens inside secure enclaves: hardware black boxes where even the person running the computer can't see what's happening inside. The AI does its checking, the result comes out, but the input data stays hidden.
At the end, the system produces a cryptographic certificate that says "this claim has been verified" without revealing any of the underlying information that went into the verification.
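To give a feel for how a certificate can vouch for a claim without revealing it, here is a toy hash-commitment version. This is a deliberate simplification of what Alex describes: real systems rely on hardware attestation and heavier cryptography, and every function name here is mine, not the project's.

```python
import hashlib
import hmac
import json
import secrets

def issue_certificate(claim: str, verdict: str, signing_key: bytes) -> dict:
    """Issue a certificate that commits to a claim without revealing it.

    The claim hides behind a salted SHA-256 commitment; only someone who
    already knows the claim can check it. An HMAC tag covers the body so
    tampering with the verdict or commitment is detectable.
    """
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + claim).encode()).hexdigest()
    body = {"commitment": commitment, "salt": salt, "verdict": verdict}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return body

def check_certificate(cert: dict, claim: str, signing_key: bytes) -> bool:
    """Re-derive the commitment and the HMAC tag; both must match."""
    body = {k: cert[k] for k in ("commitment", "salt", "verdict")}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        cert["signature"],
        hmac.new(signing_key, payload, hashlib.sha256).hexdigest())
    claim_ok = hashlib.sha256(
        (cert["salt"] + claim).encode()).hexdigest() == cert["commitment"]
    return sig_ok and claim_ok

key = secrets.token_bytes(32)
cert = issue_certificate("average last frost date in zone 7 is mid-April",
                         "verified", key)
print(check_certificate(cert,
      "average last frost date in zone 7 is mid-April", key))  # → True
```

Anyone holding the certificate sees only a hash and a verdict; only a party who already has the claim can confirm the certificate refers to it.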
"It's like having a doctor review your medical records without ever seeing your name or your face," Alex said. "They can tell you whether the diagnosis makes sense, but they couldn't identify you if they tried."
The Unexpected Thing About Bias
We talked about bias, which is one of those topics that makes everyone tense. Alex admitted that the network didn't set out to solve bias. It just sort of happened.
"Think about it," he said. "You've got verifiers all over the world. Different countries, different cultures, different political leanings. They all have to vote on whether claims are true. And they all face the same penalty for being wrong."
If a claim is politically loaded, say, something about an election or a historical event, verifiers can't just vote based on their personal beliefs. They have to vote based on evidence, because if they vote wrong, they lose money.
Over time, verifiers who let bias override accuracy get weeded out. Not because anyone is policing them, but because they keep losing their stakes.
"The network doesn't care about your politics," Alex said. "It cares about whether you're right. And that turns out to be a pretty powerful force for neutrality."
The People Actually Using This Thing
I asked Alex who's actually using this system, because infrastructure projects are notorious for building beautiful things that nobody needs.
The first real users, he said, are in finance. There are autonomous trading agents now: AI programs that make trades based on market analysis. A few of them traded on bad information and lost real money. Now some protocols require that any AI agent executing trades has its analysis verified first.
"The cryptographic certificate becomes a kind of insurance," Alex said. "If the analysis was verified and still wrong, the protocol has recourse against the verifiers who approved it."
The legal thing came up again. Law firms are starting to use this. After a few high-profile embarrassments where lawyers filed briefs with fake cases, firms are getting nervous. Some now run all AI-generated legal research through verification. If a case citation is fake, the system flags it.
Medical applications are early but promising. A group of diagnostic AI companies is experimenting with using the network to verify that their recommendations align with medical literature before those recommendations reach doctors.
"It's not replacing doctors," Alex said. "It's adding a step that says 'three independent AI models agree this diagnosis is consistent with current evidence.' That's something a doctor can actually use."
The Skeptics and Their Concerns
I pushed Alex on the problems. He didn't dodge.
The biggest concern is that consensus among AI models doesn't guarantee truth. If all the models share similar training data or similar blind spots, they could all be wrong together. The network tries to enforce diversity, but it's hard to know whether that diversity is real.
Speed is another issue. Verification takes time. For applications that need instant responses, like trading or customer support, waiting for consensus might not work. The network has different tiers of verification for different needs, but that adds complexity.
Then there's the money question. Verifiers need to be paid, which means the network needs constant demand. If applications don't materialize, the economics break down. Early numbers look good (thousands of verifiers, growing query volume), but infrastructure projects have a long history of building things nobody uses.
And the regulatory stuff is a mess. If verified AI gives bad advice that causes harm, who's liable? The user? The AI developer? The verifiers? The network itself? Nobody knows. That'll get sorted by courts and regulators, not engineers.
What Keeps Him Up at Night
I asked Alex what worries him most about all of this. He was quiet for a minute.
"The thing that bothers me," he finally said, "is that we're building a system where truth is determined by economic consensus. And I don't know if that's actually truth or just something that looks like it."
If you have enough money, can you game the system? Can you coordinate a bunch of verifiers to approve false claims? The network has protections (random assignment, economic penalties, diversity requirements), but no system is ungameable.
"There's no perfect answer," he said. "But the alternative is what we have now, which is basically trusting whoever speaks most confidently. And that's not working great."
A Story From Late Night Testing
Before we wrapped up, Alex told me a story about a late night before the system launched.
One of the engineers had set up an adversarial test. He created a malicious AI designed to generate plausible-sounding but completely false information. Financial data, mostly. Things that could cause real damage if someone acted on them.
He fed question after question into the system, watching as the malicious AI spun elaborate fictions.
Then he watched as the verifiers caught every single one.
The malicious AI would generate a false claim. The verifiers, running different models with different training, would flag it. The consensus would reject it. The certificate would show verification failed.
The engineer who built the malicious model, the one trying to break the system, sat there looking at the results.
"It's not that the AI stopped lying," he said. "It's that lying stopped working."
Alex told me that's the moment he knew they might have something.
Where This Goes
The network is live now. People are staking tokens, queries are flowing, applications are being built. The early signs are promising, but the long-term question is still open.
Will verified AI become normal, or will we keep trusting unverified models because they're faster and cheaper? Will companies pay for verification, or will they accept the risk of hallucinations? Will the economic incentives hold up over time, or will someone find a way to break them?
I don't know the answers. Neither does Alex.
But I keep thinking about my lawyer friend Mike and those fake court cases. He would have paid something for a system that could have caught them before he filed that brief. He would have paid for verification.
@Mira #Mira $MIRA
$ROBO

The future of robotics isn't just about hardware; it's about governance. @Fabric Foundation is building a decentralized robot economy where $ROBO powers coordination, incentives, and accountability. Who controls the machines matters. With #ROBO, the community shapes protocol rules, emissions, and real-world impact.
THE POLITICS OF THE ROBOT ECONOMY: FABRIC PROTOCOL
Introduction
Having examined the social, economic, and technical aspects of the Fabric Protocol, I turned to how it is governed. Any system combining AI, robots, and blockchain introduces new power relationships. Fabric claims to be decentralized, resting on a non-profit foundation and community rules. But who actually operates the network? Where does the power to set token rules come from? And what laws and ethics will a world of robots that trade and make decisions require? This piece examines the political and governance side of the robot economy. I am not reciting marketing slogans; I want to know what the incentives are and which structures determine who wins and who loses.
Dual Structure: Foundation vs. Protocol Ltd
The protocol is maintained by the non-profit Fabric Foundation, while the $ROBO token is issued by Fabric Protocol Ltd., registered in the British Virgin Islands. An institutional report indicates the project raised $20M in a Series A round led by Pantera Capital, Coinbase Ventures, and other large investors. The report states that Fabric aims to develop open robotics hardware and software, targeting cross-platform compatibility and decentralized identity. The non-profit is supposed to let anybody participate and prevent any single company from taking control. Nevertheless, the existence of a commercial enterprise that sells tokens and organizes the project may create conflicts. When does the for-profit serve the community's interests? And when profit is made, where does it go? This two-part arrangement resembles other crypto organizations: the Ethereum Foundation funds research and upgrades, while commercial products are built by companies such as ConsenSys. Fabric's case is different because its product is real-world robots operating in the physical world.
Real robots carry real risks that plain software does not. When a robot injures a person, who gets sued: the non-profit, the for-profit, or the token holders? The structure must answer these governance and legal questions.
Tokenomics and Power
The division of power can be read from the distribution of $ROBO. The report states that 29.7 percent of tokens go to the community, while 44.3 percent belong to investors and the team (24.3 percent investors, 20 percent team). Vesting schedules cover 87.25 percent of the supply. Token holders may vote on network rules, fees, and upgrades, so most decisions can be swayed by early investors and core team members. This risk is not just theory. Researchers at Brookings note that most decentralized platforms have large actors holding the majority of power, which undermines decentralization. Under token-based governance, big holders can decide protocol changes and resource distribution. Even proof-of-stake systems are prone to concentration: more than 30 percent of staked Ether is controlled by Lido. The same could happen in the robot economy if token holders or staking pools grow too powerful. Tokenomics also shapes motivation. When the emission rate (new tokens) is high, token value can be diluted and people may not commit for the long term. When emission is low, the community might lack funds to expand. Fabric says it will adjust emissions based on network congestion and the quality of contributions, but these rules are only vaguely described. If the emission rule becomes politicized, big holders might lobby for rules that favor them.
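The concentration concern can be made concrete with the report's own numbers. The distribution figures below come from the report; the simple-majority threshold is my assumption for the sake of illustration.

```python
# Token distribution per the report (percent of supply).
distribution = {"community": 29.7, "investors": 24.3, "team": 20.0}
other = 100.0 - sum(distribution.values())  # remainder (treasury, ecosystem, ...)

insiders = distribution["investors"] + distribution["team"]
print(f"insiders control {insiders:.1f}% of supply")  # 44.3%

# If governance required a simple majority of the voting supply, insiders
# would need only a small slice of the remaining tokens to pass proposals.
majority = 50.0
shortfall = majority - insiders
print(f"insiders need only {shortfall:.1f} more percentage points for a majority")
```

The arithmetic is trivial, which is the point: a 5.7-point gap is easy to close on the open market, so "community governance" can be thinner than the headline 29.7 percent suggests.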
Emission rules should be measurable and transparent to keep the token supply healthy. Otherwise, monetary policy will turn into a political fight.
The Risk of Re-Centralization and the Need for Policy
Decentralization is not a yes/no property. Brookings researchers observe that many blockchain platforms have re-centralized: large players emerge and make them less open. They argue that decentralization must be maintained through fair governance, disclosure of token holdings, and limits on influence. Possible safeguards include capping voting rights, quadratic voting, and hybrid key management, so that no single party can decide everything. Without such rules, even a non-profit foundation can be captured by insiders or powerful groups. The risk is greater in robotics because safety is at stake. If a small number of validators decide how tasks are checked and paid, they could block tasks, overcharge, or alter robot behavior. Poorly managed consensus can waste resources and let bad actors redirect robots or embezzle funds. The protocol must combine transparency with anti-centralization measures and accountability. Tools such as decentralized identity lists, community multisignature wallets, and slashing penalties for bad behavior can help, but they are hard to build. Moreover, at robotics scale, even modest power can be used to shape the physical world. If a large token holder can coordinate the schedules of delivery robots in a city, for example, they could favor their own services or shut out competitors. Regulators may then treat validators as critical infrastructure and subject them to oversight. A combination of on-chain rules and government legislation will be an essential component of the robot economy.
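Quadratic voting, one of the safeguards mentioned above, makes buying influence progressively more expensive: casting n votes costs n² tokens. A minimal sketch, with invented numbers, shows why that blunts whales:

```python
import math

def votes_affordable(tokens: float) -> int:
    """Under quadratic voting, n votes cost n^2 tokens,
    so a budget of T tokens buys floor(sqrt(T)) votes."""
    return math.isqrt(int(tokens))

# A whale holding 10,000 tokens vs. 100 holders with 100 tokens each.
whale_votes = votes_affordable(10_000)         # sqrt(10000) = 100 votes
community_votes = 100 * votes_affordable(100)  # 100 holders * 10 votes = 1000
print(whale_votes, community_votes)  # → 100 1000
```

Under one-token-one-vote the whale would match the entire community; under quadratic voting the same tokens buy a tenth of the community's voice. The open problem, unaddressed here, is Sybil resistance: splitting the whale's tokens across fake identities restores linear power.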
Fragments of Regulation and the Problems of Cross-Jurisdiction. Both robots and blockchain have to deal with dynamic regulations. The report indicates that regulation varies significantly across countries and therefore a protocol developed to suit U.S. regulations might struggle in Europe or the Asian region. According to the automation article, overlaps in regulations and absence of standardization prevent cross-platform work and complicate the process of adhering to data protection regulations. When there is no universal agreement on the laws, the companies can only start in amicable locations and this restricts the international coverage. Robots will manage personal information of our houses, workplaces and health conditions as they become increasingly prevalent and interconnected. According to the GCR report, decentralized systems need to reduce the amount of risk associated with excessive concentration and allow users to control access to their data. The automation article further explains that although blockchain indicates the identity of the person who does what, it may also reveal confidential details of how it operates unintentionally. The necessity to achieve a balance between transparency and confidentiality requires privacy saving techniques. Firms with considerable investment in robot AI are also concerned about the loss of intellectual property in case the algorithm and the data are registered on an immutable registry. Permissioned chains, with zero-knowledge proofs and secure enclaves can be helpful but also increase the difficulty of connecting systems as well as rendering them genuinely decentralized. The politics of data is already red-hot. In case robots take audio-video records in the open space and upload such information to the chain, individuals may not like to be under constant monitoring. There are areas that consent must be made before a recording; other areas are subject to facial recognition. 
The design of fabric should be within these laws. It should also make decisions on ownership of the data: robot owner, filmed people or the community? Data markets might be exploitative, as the scandals of social media have been without definitive regulations. Concerns on intellectual property also emerge. Robotics companies would not prefer recording sensor data in a public registry since rivals can reverse engineer their algorithms. They can desire encryption or selective disclosure. So Fabric might be a mixture of public and private data network. The government should find the middle ground in this, being fair and keeping proprietary tech protected. Machine Ethics and Accountability Ethics and responsibility arise as a matter of politics when robots are left to do their own things. Should robots be accorded legal status? Who ought to be the wrongdoer in the event of a robot acting wrongly? Fabric Protocol assigns a verifiable ID to every robot and logs the activities on the chain. This allows us to audit what has transpired but it does not resolve the question of accountability. Having no evident guidelines, manufacturers can pass on the responsibility to protocol, and the operator can blame the manufacturer. Shared governance should establish a sense of responsibility and ensure that it provides incentives that encourage safe actions. This could be in the form of staking, where the owners of robots pledged their bonds upon bad manner or insurance pools funded by network charges. Ethics go beyond accidents. It is possible to use robots in the work which attracts some moral issues such as surveillance, police work or military work. Community governance will not prevent harmful uses when the token holders are primarily interested in profit. Unregulated use may be required to limit or prohibit certain uses. What is good is that fabric is open enough to catalyze good innovation: It can also be used to do bad things more readily. 
This is comparable to open-source software controversy: it provides lots of power to developers but also allows bad actors to use it. One more issue that is ethically problematic is algorithmic bias. When the workload is chosen by token rewards, there is a possibility that the robot avoids low paying yet socially beneficial work, such as delivering medicine to the poor areas. Social values must be incorporated to task-assignment algorithms by governance. Perhaps some portion of rewards ought to compensate the unprofitable but necessary services. Such decisions are not technical. Long-term effects: Worker and Machine Rights. The more autonomy and economic power robots have, the more they may appear to be more than mere tools. There is a debate on whether advanced AI has moral consideration by philosophers and legal experts. Should robots be entitled to rights or representation in case they can make money and conclude contracts? And what will the human workers have to be when robots take over the economy? In the absence of a proactive policy, the transition may increase inequality and lead to unrest. Among them is a universal robot dividend or basic income as was discussed earlier. The other one is to keep humans in control of major decisions and to maintain some jobs (e.g. care giving) to humans only. The work, citizenship and rights of the society might need to be reconsidered in the long term when machines are part and not merely a property. We can refer to other models of governance. To have a balance between expertise and inclusion, open-source software communities combine meritocracy and committees. Cooperative businesses avoid the concentration by the use of one person one vote. Quadratic voting and token-weight caps are employed as blockchain network limits to big holders. 
Fabric might attempt similar, such as providing local communities with veto power over the use of robots or voting power proportional to the contribution made rather than to the number of tokens one has. However, these concepts require an emphasis on community health in the long term, rather than on short-term returns to investors. Other Protocol Comparisons and Community Comparisons. In the quest to learn more about the governance decisions of Fabric, it is useful to consider other systems. The rules of Bitcoin are extremely simple: no formal voting or changes and only when the majority of miners and nodes switch the software, changes will occur.  Ethernet allowed individuals to make suggestions on the chain and organize amongst clients, although this is still dependent on people making agreements off-chain.  Fabric engages in token voting and a non-profit making foundation.  This can be compared to contemporary DAOs, except it has a company that runs it as well. In contrast to Bitcoin, the tokens of Fabric provide actual economic privileges.  The foundation is more proactive in the development as opposed to Ethereum.  The mix has the ability to make decisions appear more central and yet, seem to be decentral. It is instructive to compare it to open source communities such as the Linux kernel. Linux is maintained by few experienced professionals who select and analyze code.  The companies contribute money and computers, however, they do not determine what changes are to be included into the kernel.  People gain recognition through reputation, which is not by way of a token.  This system is in support of billions of infrastructure.  On the negative aspect, free-software projects may find it difficult to compensate individuals since they are dependent on volunteers.  The token of Fabric is an attempt to compensate the contributors, though, it attracts speculators.  
It aims at maintaining open-source excitement and achieving a regular corporate investment without reducing a few individuals to control the discussion. Competition and Geopolitics. Outside the internal regulations, there are international politics of the robot economy.  Countries use robotics and AI as a strategic resource.  China, United States, EU, Japan are putting a lot of money in robot research and manufacture.  The global account and coin of fabric might emerge as an arena of struggle of power.  Governments may either prod or pull towards the use or the use of Fabric depending on its suitability with their objectives.  The protocol can be copied by some countries to retain control.  Foreign robots may have their path to the market blocked by others.  Companies such as Amazon and Tesla are developing their robots.  The open design of fabric has the potential of putting the giants on its toes, yet they can take advantage of their scale to bend the rules or create competing networks. Standards could be coordinated with the assistance of international organisations.  International Organization of Standardization (ISO) is already authoring safety regulations to industrial robots.  The International Telecommunication Union (ITU) studies ethics of AI.  These organizations may establish the regulations governing the robot economy and exchange of data.  Unless they collaborate, it could be possible that there would be a lot of incompatible systems that decreases the growth of each other. Democratic Governance and Policy Recommendations. In order to minimize risks, I would propose the following few policies: - Distribution of tokens must be different.  Quadratic voting method or stake limits or ageing voting methods can be used to ensure that a few individuals do not command the majority of the tokens.  Invest a large portion of tokens in social research and social program funds, financial institutions. 
- Hybrid governance ought to combine token voting with councils with workers, local groups and regulators.  This assists in ensuring that decisions are made in a way that it takes into account everybody and not only money holders. - There should be a demand of transparency.  Reporting on the ownership of tokens, the number of validators, and their decision-making process should be reported regularly.  The open information prevents secluded transactions and fosters credibility. - Law systems must be transparent.  Collaborate with legislators to establish the parties responsible, ownership of data, and robots taxes and rights.  Certification programs will be able to ensure that the robots comply with the safety and ethical standards before they become part of Fabric. - Privacy should be designed in.  Privacy usage like zero-knowledge proofs and store data locally when being shown to the user.  Provide means of deleting or anonymising data where requested by the law. These recommendations are not an exhaustive list, but they demonstrate a need to have technology design and political design shift in a similar direction. The success of Fabric will not be possible without it consulting legal experts, ethicists, worker groups, and lawmakers. Conclusion The regulation of the robot economy is not only a technical fact; it does determine whether Fabric will become a power sharing endeavor or support the existing structures. Majority tokens, in-group transactions, and watered-down law are actual threats, which require a cautious response in policy. In order to ensure that robots become beneficial to society, we need to develop equitable regulations, coordinate with others around the world, safeguard privacy and intellectual rights and establish social safety nets. The two-sidedness of fabric, where it is a non-profit organization and a token-issuing company, requires a clean sheet to avoid conflict. 
It must also co-operate with national and international institutions to address transnational regulations. Unless we address these political and ethical concerns, the robot economy can increase inequality but in the guise of being open. When we get it right, we will be able to build a future in which people and machines are success and influence makers. Politics of robots will rely on both our common choices and code. $ROBO #ROBO @FabricFND

THE POLITICS OF THE ROBOT ECONOMY: FABRIC PROTOCOL

Introduction

Having examined the social, economic, and technical aspects of the Fabric Protocol, I now turn to how it is governed. Any system combining AI, robots, and blockchain introduces new power relationships. Fabric claims to be decentralized, resting on a non-profit foundation and community rules. But who actually runs the network? Where does the power encoded in the token rules come from? And what laws and ethics will a world of robots that trade and make decisions require? This piece examines the political and governance dimension of the robot economy. I am not reciting marketing slogans; I want to understand the incentives and the structures that determine who wins and who loses.
Dual Structure: Foundation vs. Protocol Ltd.
The protocol is maintained by the non-profit Fabric Foundation, while the $ROBO token is issued by Fabric Protocol Ltd., a company registered in the British Virgin Islands. An institutional report indicates that the project raised $20M in a Series A round led by Pantera Capital, with Coinbase Ventures and other large investors participating. The report states that Fabric aims to build open robotics hardware and software, targeting cross-platform compatibility and decentralized identity. The non-profit is supposed to let anyone participate and prevent any single company from taking control. Yet the existence of a commercial entity that sells tokens and steers the project creates potential conflicts. Whose interests does the for-profit serve when they diverge from the community's? And when profits are made, where do they go?
This two-part arrangement resembles other crypto organizations: the Ethereum Foundation funds research and upgrades, while companies such as ConsenSys build commercial products. Fabric's case is different because its product is robots operating in the physical world. Real robots carry risks that plain software does not. When a robot injures a person, who gets sued: the non-profit, the for-profit, or the token holders? The structure must answer these governance and legal questions.
Tokenomics and Power
The distribution of ROBO reveals the distribution of power. The report states that 29.7 percent of tokens go to the community, while 44.3 percent belong to investors and the team (24.3 percent to investors, 20 percent to the team). Vesting schedules cover 87.25 percent of the supply, concentrating it further. Token holders can vote on network rules, fees, and upgrades, so early investors and core team members can sway most decisions.
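Under simple one-token-one-vote governance, the allocation figures above already tell the story. A minimal sketch, using the percentages from the report; the one-token-one-vote majority model and the "other" bucket are my assumptions, not Fabric's published rules:

```python
# Illustrative sketch only: the allocation percentages come from the post;
# the one-token-one-vote voting model and the "other" bucket are assumptions.
allocations = {
    "community": 29.7,   # stated in the report
    "investors": 24.3,   # stated in the report
    "team": 20.0,        # stated in the report
    "other": 26.0,       # assumed remainder (treasury/ecosystem); not itemized in the post
}

def coalition_share(groups, alloc):
    """Combined voting share of a coalition under one-token-one-vote."""
    return sum(alloc[g] for g in groups)

# Investors plus team (44.3%) already out-vote the community allocation
# (29.7%), before any vesting cliffs unlock further supply.
insiders = coalition_share(["investors", "team"], allocations)
community = coalition_share(["community"], allocations)
```

The point is not the exact numbers but the shape: any safeguard (caps, quadratic weights) has to change the `coalition_share` math, or insiders decide by default.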
The risk is not merely theoretical. Researchers at Brookings have found that on most nominally decentralized platforms, a few large actors hold the bulk of the power, undermining decentralization. They note that under token-based governance, big holders can decide protocol changes and resource distribution. Even proof-of-stake systems are prone to concentration: more than 30 percent of staked Ether is controlled through Lido. The same could happen in the robot economy if token holders or staking pools grow too powerful.
Tokenomics also shapes motivation. If the emission rate (the pace of new token issuance) is high, token value is diluted and people may lose long-term commitment. If emission is low, the community may lack funds to grow. Fabric says it will adjust emissions based on network congestion and the quality of contributions, but the mechanism is only vaguely described. If the emission rule becomes politicized, big holders might lobby for rules that favor them. The rules should be measurable and transparent to keep the token supply healthy; otherwise, monetary policy becomes a brawl.
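To see what "measurable" could mean here, consider a purely hypothetical congestion-and-quality-adjusted schedule. Fabric has published no formula, so every parameter and coefficient below is invented for illustration:

```python
# Hypothetical emission rule, sketched for illustration only. Fabric says it
# adjusts emissions by congestion and contribution quality but gives no
# formula; the functional form and constants here are invented.
def epoch_emission(base: float, congestion: float, quality: float) -> float:
    """Tokens minted this epoch.

    base       -- baseline emission per epoch
    congestion -- network utilization in [0, 1]; more demand -> more rewards
    quality    -- average contribution-quality score in [0, 1]
    """
    return base * (0.5 + 0.5 * congestion) * quality

# A quiet, low-quality epoch mints far less than a busy, high-quality one.
low = epoch_emission(1000.0, congestion=0.1, quality=0.4)
high = epoch_emission(1000.0, congestion=0.9, quality=0.95)
```

A rule this explicit can be audited and debated; a vague promise to "adjust emissions" cannot.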
Risk of Re-Centralization and the Need for Policy
Decentralization is not a yes/no property. Brookings researchers observe that many blockchain platforms have re-centralized as large players emerge and close the system off. They argue that decentralization must be actively maintained through fair governance, disclosure of token holdings, and limits on influence. Possible safeguards include caps on voting power, quadratic voting, and hybrid key management that prevents any single party from deciding everything. Without such rules, even a non-profit foundation could be captured by insiders or powerful factions.
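Quadratic voting, one of the safeguards mentioned above, fits in a few lines: influence grows with the square root of tokens committed, which blunts large holders. Whether Fabric adopts anything like this is an open question; the sketch is illustrative only:

```python
# Quadratic voting sketch: voting weight is the square root of tokens
# committed. Illustrative only; not a mechanism Fabric has committed to.
import math

def qv_weight(tokens: float) -> float:
    return math.sqrt(tokens)

whale = qv_weight(1_000_000)   # 1000.0
small = qv_weight(100)         # 10.0
# Under one-token-one-vote the whale has 10,000x the small holder's weight;
# under quadratic voting the gap shrinks to 100x.
```

The catch, well known in practice, is Sybil resistance: splitting a whale's stake across many wallets defeats the square root, so quadratic voting needs an identity layer behind it.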
In robotics the stakes are higher because safety is on the line. If a small number of validators control how tasks are verified and paid, they could block tasks, extract higher fees, or alter robot behavior. Badly managed consensus can waste resources and let bad actors redirect robots or embezzle funds. The protocol must combine transparency with anti-centralization measures while preserving accountability. Tools such as decentralized identity registries, community multisignature wallets, and slashing penalties for misbehavior can help, but they are hard to build well.
Moreover, at robotics scale even modest power translates into real-world impact. If a large token holder can coordinate the schedules of delivery robots across a city, they could favor their own services or shut out competitors. Regulators might then treat validators as critical infrastructure and bring them under oversight. A combination of on-chain rules and government legislation will be an essential part of the robot economy.
Regulatory Fragmentation and Cross-Jurisdiction Problems
Both robots and blockchains face fast-moving regulation. The report notes that rules vary significantly across countries, so a protocol built for U.S. regulations may struggle in Europe or Asia. According to the automation article, overlapping regulations and a lack of standardization block cross-platform work and complicate compliance with data-protection law.
Without broad legal agreement, companies can launch only in friendly jurisdictions, which limits global reach.
As robots become more prevalent and interconnected, they will handle personal information from our homes, workplaces, and health records. The GCR report argues that decentralized systems must reduce the risk of excessive data concentration and let users control access to their data. The automation article adds that while a blockchain records who did what, it may also unintentionally reveal confidential operational details. Balancing transparency and confidentiality requires privacy-preserving techniques. Firms that have invested heavily in robot AI also worry about losing intellectual property if their algorithms and data end up on an immutable registry. Permissioned chains, zero-knowledge proofs, and secure enclaves can help, but they make systems harder to interconnect and harder to call genuinely decentralized.
The politics of data is already red-hot. If robots record audio and video in public spaces and upload it to the chain, people may object to constant monitoring. Some jurisdictions require consent before recording; others restrict facial recognition. Fabric's design must operate within these laws. It must also decide who owns the data: the robot's owner, the people recorded, or the community? Without clear rules, data markets could become as exploitative as the social-media scandals showed.
Intellectual-property concerns arise too. Robotics companies will resist recording sensor data in a public registry, since rivals could reverse-engineer their algorithms. They will want encryption or selective disclosure, so Fabric may end up a mix of public and private data networks. Governance must find the middle ground: staying open and fair while protecting proprietary technology.
Machine Ethics and Accountability
When robots act on their own, ethics and responsibility become political questions. Should robots be granted legal status? Who is at fault when a robot does wrong? Fabric Protocol assigns every robot a verifiable ID and logs its actions on-chain. That lets us audit what happened, but it does not settle accountability. Without clear guidelines, manufacturers can pass responsibility to the protocol, and operators can blame the manufacturer. Shared governance should assign responsibility explicitly and create incentives for safe behavior, for example through staking, where robot owners forfeit bonds for misbehavior, or through insurance pools funded by network fees.
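A staking-and-insurance mechanism of the kind suggested above could look roughly like this. It is a sketch of the general idea, not anything Fabric has specified; the class, the slashing fraction, and the pool design are all hypothetical:

```python
# Sketch of a slashing scheme for robot operators: an operator posts a
# bond, and misbehavior burns part of it into an insurance pool for
# victims. Hypothetical design; Fabric has published no such mechanism.
class OperatorBond:
    def __init__(self, stake: float):
        self.stake = stake          # tokens the operator has locked up
        self.insurance_pool = 0.0   # accumulated penalties, earmarked for victims

    def slash(self, fraction: float) -> float:
        """Move a fraction of the remaining stake into the insurance pool."""
        penalty = self.stake * fraction
        self.stake -= penalty
        self.insurance_pool += penalty
        return penalty

bond = OperatorBond(stake=10_000.0)
bond.slash(0.25)  # a safety incident costs the operator 25% of the bond
# bond.stake is now 7500.0; bond.insurance_pool holds 2500.0
```

The open governance question is who triggers `slash`: an oracle, a validator vote, or a court order. Each answer concentrates power somewhere.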
Ethics goes beyond accidents. Robots can be put to morally fraught work such as surveillance, policing, or military use. Community governance will not block harmful uses if token holders care mostly about profit. Regulation may be needed to limit or prohibit certain uses. The openness that lets Fabric catalyze good innovation also makes it easier to do bad things. This mirrors the open-source software debate: openness hands developers a lot of power, but bad actors get the same tools.
Algorithmic bias is another ethical problem. If workloads are selected by token rewards, robots may avoid low-paying but socially valuable work, such as delivering medicine to poor neighborhoods. Governance must build social values into task-assignment algorithms; perhaps a share of rewards should subsidize unprofitable but necessary services. These decisions are not merely technical.
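One way governance could encode such a subsidy is to top up the effective reward of high-social-value tasks so that reward-maximizing robots stop skipping them. The rule, the field names, and all numbers below are hypothetical:

```python
# Hypothetical subsidized task-selection rule: a governance-set subsidy
# tops up socially valuable but low-paying tasks. Task names, scores,
# and the subsidy rate are invented for illustration.
def effective_reward(base_pay: float, social_value: float, subsidy_rate: float) -> float:
    """base_pay in tokens; social_value in [0, 1]; subsidy_rate in tokens per unit value."""
    return base_pay + subsidy_rate * social_value

def best_task(tasks, subsidy_rate: float):
    """The task a reward-maximizing robot would pick."""
    return max(tasks, key=lambda t: effective_reward(t["pay"], t["social"], subsidy_rate))

tasks = [
    {"name": "luxury delivery", "pay": 12.0, "social": 0.1},
    {"name": "medicine run",    "pay": 4.0,  "social": 0.9},
]
# With no subsidy the robot takes the luxury job; at a subsidy rate of 15
# tokens per unit of social value, the medicine run wins instead.
```

Who sets `social_value` and `subsidy_rate` is exactly the political question the paragraph above raises.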
Long-Term Effects: Worker and Machine Rights
The more autonomy and economic power robots gain, the more they may look like something more than tools. Philosophers and legal scholars already debate whether advanced AI deserves moral consideration. Should robots have rights or representation if they can earn money and sign contracts? And what becomes of human workers as robots take over more of the economy? Without proactive policy, the transition could deepen inequality and spark unrest. One proposal is a universal robot dividend or basic income, as discussed earlier. Another is to keep humans in control of major decisions and reserve some work, such as caregiving, for humans only. In the long run, society may need to rethink work, citizenship, and rights when machines are participants rather than mere property.
Other governance models offer lessons. Open-source software communities combine meritocracy with committees to balance expertise and inclusion. Cooperative businesses avoid concentration through one-person-one-vote. Blockchain networks limit big holders with quadratic voting and token-weight caps. Fabric might try similar measures, such as giving local communities veto power over robot deployments, or weighting votes by contribution rather than token count. But these ideas require prioritizing long-term community health over short-term investor returns.
Comparisons with Other Protocols and Communities
To understand Fabric's governance choices, it helps to look at other systems. Bitcoin's rules are extremely simple: there is no formal voting, and changes happen only when a majority of miners and nodes adopt new software. Ethereum lets people make on-chain proposals and coordinate among clients, though it still depends on off-chain agreement. Fabric uses token voting plus a non-profit foundation. That resembles contemporary DAOs, except a company also runs the project.
Unlike Bitcoin's, Fabric's tokens carry real economic privileges. Its foundation is more hands-on in development than Ethereum's. The mix can make decisions more centralized in practice while still appearing decentralized.
The Linux kernel offers an instructive open-source comparison. Linux is maintained by a small group of experienced maintainers who select and review code. Companies contribute money and machines, but they do not dictate which changes enter the kernel. People earn influence through reputation, not tokens. This system underpins billions of dollars of infrastructure. On the downside, free-software projects often struggle to pay people because they depend on volunteers. Fabric's token is an attempt to compensate contributors, though it also attracts speculators. The goal is to keep open-source enthusiasm and steady corporate investment without letting a few individuals control the conversation.
Competition and Geopolitics
Beyond its internal rules, the robot economy has international politics. Countries treat robotics and AI as strategic assets: China, the United States, the EU, and Japan are pouring money into robot research and manufacturing. Fabric's global ledger and currency could become an arena for power struggles. Governments may push adoption of Fabric or push back against it, depending on whether it suits their goals. Some countries may fork the protocol to keep control; others may block foreign robots from their markets. Companies such as Amazon and Tesla are building their own robots. Fabric's open design could keep the giants on their toes, but they can also use their scale to bend the rules or build competing networks.
International organizations could help coordinate standards. The International Organization for Standardization (ISO) already writes safety standards for industrial robots, and the International Telecommunication Union (ITU) studies AI ethics. These bodies could shape the rules for the robot economy and data exchange. Without collaboration, we may end up with many incompatible systems that stunt each other's growth.
Democratic Governance and Policy Recommendations
To minimize these risks, I would propose a few policies:
- Broaden token distribution. Quadratic voting, stake caps, or vote-aging can prevent a few individuals from commanding most of the voting power. A large share of tokens should fund public research, social programs, and institutional safeguards.
- Adopt hybrid governance. Combine token voting with councils of workers, local groups, and regulators, so decisions account for everyone, not just the money holders.
- Demand transparency. Token ownership, validator counts, and decision-making processes should be reported regularly. Open information prevents backroom deals and builds credibility.
- Make legal frameworks clear. Work with legislators to establish liability, data ownership, and robot taxation and rights. Certification programs can verify that robots meet safety and ethical standards before joining Fabric.
- Design privacy in. Use techniques such as zero-knowledge proofs, and store data locally while showing users only what they need to see. Provide ways to delete or anonymize data when the law requires it.
These recommendations are not exhaustive, but they show that technical design and political design must move in the same direction.
Fabric will not succeed without consulting legal experts, ethicists, worker groups, and lawmakers.
Conclusion
Governing the robot economy is not just a technical matter; it will determine whether Fabric becomes a power-sharing project or reinforces existing structures. Concentrated token holdings, insider deals, and diluted law are real threats that demand a careful policy response. For robots to benefit society, we need equitable rules, international coordination, protection of privacy and intellectual property, and social safety nets. Fabric's dual nature, part non-profit foundation and part token-issuing company, requires clear separation to avoid conflicts of interest. It must also cooperate with national and international institutions on transnational regulation. If we ignore these political and ethical questions, the robot economy could deepen inequality under the guise of openness. If we get them right, we can build a future in which both people and machines share in success and influence. The politics of robots will depend on our collective choices as much as on code.
$ROBO
#ROBO @FabricFND
Exploring the vision of Fabric Foundation and how it powers intelligent automation across Web3. The innovation behind @Fabric Foundation is creating real utility, and $ROBO plays a key role in driving decentralized AI solutions. Excited to see how $ROBO expands within the ecosystem and strengthens community governance. #ROBO $ROBO
I Spent a Week Diving Into the Mira Network. Here's What I Actually Found

I dug through this project for days. Here's what's real, what's messy, and whether it matters
So here I am at 11 p.m. on a Thursday, three cups of coffee deep, staring at charts and Discord screenshots, trying to figure out whether the Mira Network has actually hit on something or whether I've just wasted a week of my life
What keeps pulling me back is this: AI hallucinations scare me
Last month I built a little trading bot for fun. Nothing serious, just playing around. I had it analyze some market sentiment from Twitter. The AI came back with this perfect analysis about "growing institutional interest" and "positive macro signals." Sounded smart. I almost acted on it
🔥 I’m giving away 1000 Gifts to my amazing Square family! This is your chance to grab a Red Pocket and celebrate together 🎁 ✅ Follow me ✅ Comment “DONE” Let’s make this huge 🚀 First come, first served! $BNB