Thursday, May 24, 2018

QUADRANT PROTOCOL


Vast amounts of authentic data are needed to power today's algorithms; however, the current
data economy is fraught with problems. There is an ever-widening gap between those with the
resources to collect and store their own data and those without. The data these have-nots
can access is often fragmented and of questionable authenticity: the kind of data that
produces poor results when fed to algorithms. Part of the reason the data lacks authenticity is
that its suppliers are not properly incentivised. Fair revenue distribution exists for neither
data producers nor vendors. Without a healthy and transparent data economy, the growing
demand for authentic data will not be met.

Quadrant aims to solve these problems by providing a blueprint for mapping disparate data
sources. It will support proof of data authenticity and provenance via data stamping, the creation
of “Constellations” (data smart contracts) for disparate data sources, and fair remuneration and
incentive sharing. Data Consumers can trust the authenticity of the data they purchase, “Nurseries”
(Data Producers) are compensated fairly every time their data is used, and “Pioneers” (Data Vendors)
have the incentive to create innovative Constellations. This new transparent ecosystem ensures
that companies get the authentic data they need.
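
To make the stamping idea concrete, here is a minimal sketch of how a data record might be fingerprinted before its digest is written to the chain. The whitepaper does not specify the hashing scheme; SHA-256, the field names and the stamp_record helper are illustrative assumptions only.

    import hashlib
    import json
    import time

    def stamp_record(record: dict, producer_id: str) -> dict:
        """Fingerprint a data record; only the digest goes on-chain."""
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        digest = hashlib.sha256(payload).hexdigest()
        return {
            "producer": producer_id,        # the Nursery that created the record
            "digest": digest,               # recomputable by anyone holding the raw data
            "stamped_at": int(time.time()), # when the stamp was requested
        }

    stamp = stamp_record({"sensor": "t-101", "reading": 21.4}, producer_id="nursery-7")
    print(stamp["digest"])

A buyer holding the raw record can recompute the digest and compare it with the on-chain stamp, which is what makes authenticity checkable and provenance traceable back to the originating Nursery.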
Quadrant's greatest potential for impact lies in the ability it gives "Elons" (the brightest data
minds) to find linkages between different Constellations and, in turn, create mega Constellations
that Data Consumers can use to solve real-world problems. This is where Quadrant
differentiates itself from its competitors.
Quadrant is designed to work with both centralised and decentralised services. The architecture
consists of the core Quadrant blockchain, clients (Data Producer, Data Consumer and Anchor), and
Guardian Nodes. Quadrant will operate on a Proof of Authority consensus mechanism so that it
can handle more transactions, operate at a lower gas price, achieve faster transactions, and restrict
malicious nodes from entering data into the network. An external Proof of Work chain will be used
as an anchor for security purposes. For the time being, the Ethereum blockchain will be used for
anchoring, but it can be replaced by any public chain in the future if needed.
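
The anchoring step can be pictured as periodically committing a digest of recent Proof of Authority blocks to the public chain. The sketch below is a deliberate simplification: real anchoring would likely use a Merkle tree and an actual Ethereum transaction, and anchor_checkpoint and the placeholder hashes are hypothetical.

    import hashlib

    def anchor_checkpoint(poa_block_hashes: list) -> str:
        """Fold recent PoA block hashes into one checkpoint digest.

        In practice this digest would be submitted in an Ethereum
        transaction, so rewriting Quadrant history would also require
        rewriting the public Proof of Work chain.
        """
        combined = "".join(poa_block_hashes).encode("utf-8")
        return hashlib.sha256(combined).hexdigest()

    # Placeholder block hashes; a real checkpoint would cover full 32-byte hashes.
    checkpoint = anchor_checkpoint(["a3f1...", "9bc2...", "77de..."])
    print("digest to anchor on Ethereum:", checkpoint)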
Quadrant will utilise two different currencies for its network: eQuad and QUAD. QUAD, a utility
token, is designed to be used solely on the network: to stamp data, to support simple and
complex access structures and subscription payments, and for staking by Elons. eQuad is an
ERC-20-compliant token that will be sold during the Token Generation Event (TGE). It may be
converted into QUAD via a gateway when the Quadrant mainnet launches.
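
A toy model of that conversion gateway, assuming a 1:1 eQuad-to-QUAD swap; the actual ratio and mechanics are not specified here, and the Gateway class is purely illustrative.

    class Gateway:
        """Toy eQuad-to-QUAD gateway, assuming a 1:1 conversion."""

        def __init__(self):
            self.locked_equad = 0    # ERC-20 tokens locked on Ethereum
            self.quad_balances = {}  # QUAD credited on the Quadrant chain

        def convert(self, holder: str, amount: int) -> None:
            # Lock the ERC-20 eQuad, then credit an equal amount of QUAD.
            self.locked_equad += amount
            self.quad_balances[holder] = self.quad_balances.get(holder, 0) + amount

    gw = Gateway()
    gw.convert("0xholder", 1_000)
    print(gw.quad_balances["0xholder"])  # 1000 QUAD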
The TGE will have a hard cap of $20,000,000 USD. If the TGE raises over $7,000,000 USD, the
contributions will initially be locked and made transferable over the course of four years:
40% becomes available upon the close of the eQuad token sale, and the remaining 60% is
released annually towards the Company's objectives at a fixed rate of 15% per year. This is
intended to ensure Quadrant's long-term success while preventing over-spending in the
initial years.
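
The schedule is internally consistent: 40% at close plus four annual tranches of 15% accounts for the full amount. A quick sketch, assuming a hard-cap raise of $20 million vests entirely on this schedule:

    raised = 20_000_000  # USD, assuming the hard cap is reached

    schedule = [0.40] + [0.15] * 4  # at close, then one tranche per year
    assert abs(sum(schedule) - 1.0) < 1e-9  # 40% + 4 x 15% = 100%

    for year, share in enumerate(schedule):
        print(f"year {year}: {share:.0%} -> ${raised * share:,.0f}")
    # year 0: 40% -> $8,000,000; years 1-4: 15% -> $3,000,000 each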
1,000,000,000 eQuad will be created during the TGE. The tokens will be distributed as follows:
40% to the crowd-sale, 20% to be held by the Company, 20% to the Stakeholders, 10% to the Reserve,
and 10% to the Team.

The Problems Facing the Data Economy

There are four major problems facing the data economy:
1. A widening AI data gap between the haves and have-nots
2. Ubiquitous fake and unauthentic data destroying the usefulness of any algorithm
3. Unsustainable data feeds breaking production systems when they go offline
4. Unfair revenue distribution for the original data producers



Fake and Unauthentic Data

It is a universal truth that where there is money to be made, people will try to game the system.
This is no different in the data economy. Data is easy to fake, copy and misrepresent. This makes it
difficult for Data Consumers to properly vet data when purchasing it from third parties.
When purchasing data, Data Consumers want to follow applicable laws and regulations; there is
too much at stake should they be found engaging in unethical business practices. From a
business perspective, they do not want to pay for data that is not authentic. From a regulatory
perspective, they need to know where their data comes from.
Data Vendors are not always so scrupulous; some are happy as long as they make their money.
Data Consumers, therefore, are not always getting the authentic data that their systems depend
on. Unauthentic input yields poor output, whether for data-driven business decisions,
algorithmic trading, AI/machine-learning applications, or oracles for smart contracts.
If the data is false, the consequences can be dire.

Unsustainable Data Ecosystem

Free data is not sustainable. No entity can continue producing data over the long haul without
being compensated fairly for it, either directly or indirectly.
Even where data is exchanged for money, individuals and businesses cannot keep their doors
open and continue providing data unless revenue is shared properly. Fair compensation is
critical for keeping data streams diverse and authentic, and for keeping infrastructure such as
IoT sensors properly maintained. Ultimately, it is the small Data Consumers that suffer the
most, because they lose access to these data streams and may not have ready access
to alternatives.
What is needed is a sustainable ecosystem where producers are incentivised to provide authentic
data and buyers are willing to pay for it.

Fair Revenue Distribution
Producers of the original data sources have it the worst with respect to revenue distribution.
They need to be incentivised to continue producing data, yet more often than not they are paid
only once for the data they provide, while Data Vendors can resell the same data again and
again. Producers have no way of finding out what happens to the data downstream, where it
goes, or for what purpose. This casts an opaque layer over the data, leaving producers with no
idea of how much money they are owed.
Stakeholders
Each of the above problems plagues the stakeholders of the data economy: the
consumers, the vendors and the entities producing the data.
Data Consumers
Organisations are embracing data analytics, data science, machine learning and AI in more
sophisticated ways than ever before. They are either using their own in-house capacity or looking
to firms specialising in data.
In either case, organisations are likely to go through a data adoption cycle similar to the Gartner Hype Cycle:
1. Data Trigger – “I need data! Where can I get it?”
2. Peak of Inflated Expectations – “I will pay you anything for it!”
3. Trough of Disillusionment – “Wait, this data doesn’t really do what I want!”
4. Slope of Enlightenment – “If we improved the data’s x, y, z, it will work wonders!”
5. Plateau of Productivity – “Okay, I now have the data from multiple sources at a price I can afford.”
Depending on the current state of an individual market segment, companies might find themselves
in multiple stages at once. It is not always a perfectly linear progression.
For example, in the Mobile Location Intelligence data segment, it can be seen that most participants
are currently in stage 3 or 4. Conversely, in the broader data economy, many companies are only
starting to learn about the challenges that exist with their current data sources and find themselves
transitioning from stage 2 to stage 3.
Stage 3 is the most critical, as it can make or break a company and its solution. If they are unable to
obtain authentic data, they will be forever chasing the dream of using data to solve a real problem.
Once companies reach stage 5, provenance becomes the highest priority. They need to be sure
that they are paying for authentic data that will support the business.


Data Vendors
For many Data Vendors, the path to data monetisation is a journey rather than a set of linear
steps. Initially, Data Vendors struggle to find a proper product-market fit for their data. They create
multiple products over time until one proves successful to a consumer group. When this happens,
they then seek to maximise revenue from the product.

While the replication and distribution of data is relatively cheap, production costs can be high.
It is essential that Data Vendors are able to cover the capital and input costs incurred during the
creation and productising of their data assets because once the data leaves their walls, it can be
duplicated at almost no cost.
Data Vendors have no desire to incur significant costs to create a data product, only to have it
duplicated and made available by competitors at a lower cost. They want to be paid fairly for the
products they produce, and they want those products used in ways even they had not imagined
so as to maximise their revenue. They would also like to know who is utilising their data,
because it helps them understand the different ways in which their data can be used; it can even
motivate them to enrich their data further.

Atomic Data Producers (ADPs)

At this level of the data value chain, the biggest problem is that the ADPs are not paid their fair
share of the revenue made by the data that they produce. Individual data has little value on its own.
Its real value is derived when it is combined with other data sets. As a result, most data producers
will sell their data up the value chain to aggregators and resellers who can sell interesting data sets
alongside one another to multiply the impact of the insights.
The problem for ADPs is that they receive payment only once, no matter how many times the
data is resold via the resellers and aggregators. Each additional sale beyond the initial transaction
(between the ADP and the reseller) does not translate into revenue for the ADP.
That is not the only thing working against ADPs. With existing data transaction architectures,
there are prohibitive costs incurred in compensating ADPs for the data that they provide. Take a
CSV file that has thousands of medical prescriptions sourced from multiple ADPs as an example.
Figuring out the exact percentage of revenue to share amongst the contributing ADPs is inherently
cumbersome and expensive.
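
A sketch of the pro-rata bookkeeping involved, assuming revenue is split in proportion to the number of records each ADP contributed; the whitepaper does not prescribe a formula, and split_revenue and the figures are hypothetical. This is exactly the settlement that is expensive to run off-chain but that a smart contract could execute on every resale.

    def split_revenue(sale_price: float, contributions: dict) -> dict:
        """Pro-rata split of one sale among ADPs by record count."""
        total = sum(contributions.values())
        return {adp: sale_price * n / total for adp, n in contributions.items()}

    # Hypothetical prescription file sourced from three ADPs:
    payouts = split_revenue(1_000.0, {"adp-a": 5_000, "adp-b": 3_000, "adp-c": 2_000})
    print(payouts)  # {'adp-a': 500.0, 'adp-b': 300.0, 'adp-c': 200.0}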

Cross Pollination

It is rare for data to be transmitted directly from producer to end-user. No one producer has all the
data, so data needs to be aggregated from different sources to be of value. Consider Bloomberg and
Yahoo Weather: Bloomberg does not create all of the information available through its terminals,
while Yahoo does not have weather stations in every country. For these services to work, they
need to aggregate data from a variety of sources.

In almost all data transactions, there is a long value chain that starts with the ADP producing data
in its rawest form. This data is then collected by a data value-added reseller (dVAR) who collects,
aggregates and processes the data to produce a data product. The final product may end up being
aggregated by another dVAR or being the data product sold to the end Data Consumer.
Problems arise when even the aggregators do not have all of the required data, so Data
Consumers buy from multiple aggregators. When those aggregators share similar sources,
the Data Consumer ends up buying duplicate data, wasting both money and time.
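
If every record carried a provenance digest, the overlap between two aggregators' feeds would be trivial to measure before paying twice. A minimal sketch, with hypothetical digests:

    def overlap(feed_a: set, feed_b: set) -> float:
        """Fraction of feed_b's records already present in feed_a,
        comparing per-record provenance digests."""
        return len(feed_a & feed_b) / len(feed_b) if feed_b else 0.0

    # Hypothetical digests from two aggregators with shared upstream sources:
    a = {"d1", "d2", "d3", "d4"}
    b = {"d3", "d4", "d5"}
    print(f"{overlap(a, b):.0%} of aggregator B duplicates aggregator A")  # 67%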
Aggregation on its own is not a bad thing. It is essential to the data economy because it fulfils
market needs. But when the data sources are hidden, cross pollination limits the effectiveness of
the aggregated data.
One sees many examples of this across industries and it is very prevalent in the areas of adtech and
location analytics.

The Quadrant Protocol

The Quadrant Protocol is envisaged as a blockchain-based network protocol that enables access
to, and the creation and distribution of, data products and services, with authenticity and
provenance at its core. It is intended to act as a blueprint that provides an organised system
for the utilisation of decentralised data.
Quadrant maps disparate data sources so that new, innovative data products can potentially be
created to help companies meet their data needs.
This is intended to be made possible through the participation of the following stakeholders:
•• Nurseries— the Atomic Data Producers (ADPs) that create the original data records. They
create Stars (raw data), which can then be grouped into Constellations.
•• Pioneers— the Data Vendors that create data products with the smart contracts on
Quadrant.
•• Elons— the visionaries that utilise the created data products and, with them, build new and
unique products and services. They rely on Constellations and Constellation blueprints to
make sense of the data space through which they travel.
•• Guardians— the master nodes that protect the integrity of the chain, ensuring that it is not
compromised. The Guardians ensure that the Constellations created by the Pioneers are
not compromised and provide the services of stamping, authenticating and verifying data.
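
One way to picture the resulting data model: Stars as stamped records and a Constellation as a contract grouping them. This object sketch is an illustrative assumption, not the protocol's actual contract interface.

    from dataclasses import dataclass, field

    @dataclass
    class Star:
        """A raw data record created by a Nursery (ADP)."""
        digest: str   # stamp of the underlying data
        nursery: str  # who produced it

    @dataclass
    class Constellation:
        """A grouping of Stars assembled by a Pioneer."""
        name: str
        pioneer: str
        stars: list = field(default_factory=list)

        def add(self, star: Star) -> None:
            self.stars.append(star)

    c = Constellation("foot-traffic", pioneer="pioneer-1")
    c.add(Star(digest="ab12...", nursery="nursery-7"))
    print(len(c.stars))  # 1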


Features
Quadrant is intended to have the following features that are aimed at helping to solve the problems
in the data economy:
•• Proof of Data Authenticity and Provenance
•• Constellations for Disparate Data Sources
•• Fair Remuneration and Incentive Sharing




TOKEN SALE OVERVIEW

Soft Cap / Hard Cap: $3 million USD soft cap; $20 million USD hard cap (may be updated to peg to ETH)
Currency for Buying Tokens: ETH
Price: $0.05 USD = 1 eQuad (the ETH rate will be pegged the day before the sale)
Who Can Participate: Whitelisting process. Citizens of the United States of America, Canada, New Zealand, the People's Republic of China and the Republic of Korea are excluded, as are participants who fail to pass KYC/AML checks
eQuad Supply: 1,000,000,000 eQuad
eQuad Distribution: 40% Crowd-sale, 20% to be held by the Company, 20% Stakeholders, 10% Reserve, 10% Team
Type of Token: ERC-20
Public Sale: To Be Announced
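
The figures above are mutually consistent: the 40% crowd-sale allocation at the listed price exactly matches the hard cap, as this quick check shows.

    total_supply = 1_000_000_000             # eQuad
    crowd_sale_tokens = total_supply * 0.40  # 40% crowd-sale allocation
    print(crowd_sale_tokens * 0.05)          # 20000000.0 USD, matching the $20M hard cap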


ROAD MAP

TEAM


That concludes my review of the Quadrant Protocol. Please visit the links below for related information:





By : @Jengger_jali
Bitcointalk profile: https://bitcointalk.org/index.php?action=profile;u=1912849
