AIVON

Abstract
AIVON will open this blockchain protocol and ecosystem for use by third parties, including content distributors, publishers and advertisers. This will help AIVON achieve broad adoption and network effects that benefit all participants. The blockchain implementation will achieve this with smart contracts between content owners, advertisers, distributors and service providers. The smart contracts will maximize utilization by dispatching jobs to the most productive service providers and granting incentives to promote better quality.
AI Computer Vision (CV) algorithms running on nodes using CPU/GPU resources will scan media files and generate enhanced metadata, including time-coded tags, classifications, categories, transcripts, translations and an index of the video objects. Humans with expertise in tagging, editing, moderation, transcription and/or translation can participate in the AIVON shared economy to help verify, validate and/or create video metadata. AI Machine Learning (ML) algorithms will continually learn from the actions of the AIVON community, becoming better and smarter over time. AIVON will empower the community with tools to moderate, review and verify meta-tags and to categorize, transcribe and/or translate content, and will provide economic incentives through AVO Tokens to encourage this activity.
One of the first DApps to be built on top of the AIVON protocol will be an Open Video Search Engine (OVSE), a transparent and ubiquitous index and search engine for online video, curated and maintained by the community and governed by consensus through a Decentralized Autonomous Organization (DAO). Just as crawler bots index web pages for Google, AIVON's AI Computer Vision engine will crawl video to generate rich video metadata that will then be indexed and made searchable through the OVSE. With video representing 79% of all Internet traffic and growing, and with video becoming increasingly fragmented as more video sites go online, AIVON believes demand for the OVSE will be high as video becomes harder to find and discover. While YouTube is often viewed as a video search engine, it is really a video hosting platform: one can only find videos that users have uploaded to YouTube, which is mostly user-generated and long-tail content. Most premium content publishers do not upload their premium video content to YouTube, preferring to upload to their own video site or app. Our current company, iVideoSmart (IVS), already has a world-class platform for content streaming and delivery and innovative video advertising technologies. IVS will have a global reach of over 500 million addressable users by end of 2018, all of whom will be able to participate in the AIVON shared economy community, giving AIVON instant scale and mass usage of the AIVON utility token.
Problem Statement
Online video is everywhere, and YouTube is no longer the only destination for it. Facebook, Google Plus, Twitter and many other sites rely increasingly on video to attract users. And let's not forget viral video apps such as Instagram and Snapchat Stories. Video is becoming a standard component of most websites, as commonplace as text and graphics. But sadly, most video is not indexed in any meaningful way.
These are just some of the problems which have arisen due to the massive expansion of video use online. With the deluge of videos being uploaded and consumed, searching for and/or discovering content becomes harder and harder.
Video Content Is Opaque And Not Searchable
What is needed to work with the opaque content of video is the application of AI computer vision, such as facial recognition, to video indexing. Once the AI understands what a face is, a human can further guide it by teaching it to recognize specific faces (e.g. by associating different characteristics and details of each face with a specific tag, such as "balding" or a person's name). Once a dataset of faces is built, the AI can compare video images against this dataset and identify specific faces, such as a popular celebrity or a known criminal. The same method can be used to recognize objects (such as a gun), landmarks (such as the US White House) and action scenes (such as a man jumping).
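The "compare against a dataset" step above can be sketched as a nearest-neighbour lookup over face embeddings. This is only an illustrative toy, not AIVON's actual pipeline: the labels, vectors and threshold below are invented, and in a real system the embeddings would come from a trained CV model rather than hand-written lists.

```python
import math

# Hypothetical pre-computed face embeddings (toy 3-dimensional vectors;
# a real CV model would produce much higher-dimensional ones).
KNOWN_FACES = {
    "celebrity_a": [0.9, 0.1, 0.3],
    "celebrity_b": [0.1, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(query, threshold=0.9):
    """Return the best-matching known face, or None if no match is close enough."""
    best_label, best_score = None, 0.0
    for label, embedding in KNOWN_FACES.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

print(identify([0.88, 0.12, 0.28]))  # embedding close to celebrity_a
```

The threshold is the key design choice: too low and unknown faces are mislabeled, too high and genuine matches are rejected.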
Google, Bing and Yahoo search engines work by indexing the textual content of pages. These search engines have two major functions: crawling and building an index, and providing search users with a ranked list of the websites they have determined are most relevant. Crawling not only allows them to locate obscure content, but also to rank it based on the number of inbound links, or "backlinks".
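The two functions described above can be sketched in miniature: build an inverted index from page text, then rank query hits by backlink count. The pages, text and links here are made-up stand-ins, and real engines use far more ranking signals than raw backlink counts:

```python
# Toy corpus: each "page" has some text and outbound links to other pages.
PAGES = {
    "site_a": {"text": "cats and dogs", "links": ["site_b"]},
    "site_b": {"text": "dogs playing", "links": []},
    "site_c": {"text": "cats sleeping", "links": ["site_b", "site_a"]},
}

def build_index(pages):
    """Inverted index: word -> set of pages containing it."""
    index = {}
    for url, page in pages.items():
        for word in page["text"].split():
            index.setdefault(word, set()).add(url)
    return index

def count_backlinks(pages):
    """Number of inbound links pointing at each page."""
    counts = {url: 0 for url in pages}
    for page in pages.values():
        for target in page["links"]:
            counts[target] += 1
    return counts

def search(query, pages):
    """Pages containing the query word, most-linked first."""
    index, backlinks = build_index(pages), count_backlinks(pages)
    hits = index.get(query, set())
    return sorted(hits, key=lambda url: backlinks[url], reverse=True)

print(search("dogs", PAGES))  # site_b outranks site_a (2 backlinks vs 1)
```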
Moreover, it is often unclear whether the metadata accessible to a search engine applies to specific scenes or to the video as a whole. This is due to the lack of scene-level indexes that describe the content in temporal terms, with timecode references for each categorization. Many types of content do include closed captions: transcriptions of the audio portion, indexed with timecode and embedded in the video recording as machine-readable data that can be displayed on screen. However, such transcription information doesn't usually play a role in search metadata. If it were indexed, search technology could be applied to captioned videos to provide scene-level indexing as well as to describe the content as a complete work.
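Indexing captions by timecode, as suggested above, can be sketched with a few lines of parsing. The sample below assumes SRT-format captions (a common but not universal closed-caption format) and invented caption text; it maps each spoken word to the start timecode of its scene:

```python
import re

# Hypothetical SRT-style closed captions for a short video.
SRT = """1
00:00:01,000 --> 00:00:04,000
Welcome to the White House tour.

2
00:00:05,000 --> 00:00:09,000
Here is the famous Oval Office.
"""

def index_captions(srt_text):
    """Map each word in the captions to the start timecodes where it is spoken."""
    index = {}
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        start = lines[1].split(" --> ")[0]  # scene start timecode
        for word in re.findall(r"[a-z]+", " ".join(lines[2:]).lower()):
            index.setdefault(word, []).append(start)
    return index

def find_scenes(index, word):
    """Timecodes of scenes whose captions mention the word."""
    return index.get(word.lower(), [])

idx = index_captions(SRT)
print(find_scenes(idx, "Oval"))  # jumps straight to the second scene
```

This is exactly the scene-level granularity the paragraph describes: a text query resolves to a timecode inside the video rather than to the video as an undifferentiated whole.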
Author (rawon ayam)
#Aivon #aivonico #tokensale #AI #Blockchain #aivonio
