POSTED: March 4, 2018
BY: Concentrix Catalyst
Categories: API Management & Security, Data & Analytics
Edge computing is a business opportunity born of necessity. Data from devices connected to the internet is expected to reach 403 zettabytes in 2018; a zettabyte is a trillion gigabytes, or 10^21 bytes. If that seems like a large amount of data, consider that this raw, behemoth volume of information must then be stored, transferred and processed, all before it can be fed into machine learning systems to create insight and predict an optimal course of action.
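To put that figure in perspective, here is a quick back-of-the-envelope check of the unit conversion (a sketch in Python, assuming decimal SI prefixes; the 403 ZB projection is the only figure taken from the text):

# 1 zettabyte (ZB) = 10**21 bytes = 10**12 gigabytes (a trillion GB).
ZB_IN_BYTES = 10**21
GB_IN_BYTES = 10**9

iot_data_zb = 403  # projected IoT data volume for 2018
iot_data_gb = iot_data_zb * ZB_IN_BYTES / GB_IN_BYTES

print(f"{iot_data_zb} ZB = {iot_data_gb:.2e} GB")  # -> 403 ZB = 4.03e+14 GB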
Edge computing arose to support the huge storage and bandwidth requirements of data generated by “smart” connected devices, known in aggregate as the Internet of Things (IoT). Consider a few examples from industries at the forefront of data collection:
Every infant heartbeat monitored, every pair of size 8 acid-washed jeans sold, every online transaction analyzed for fraud contributes to a picture that takes an incredible volume of data to paint.
Why does data volume matter? Datasets must be large for results to be accurate. If datasets are too small or are influenced by human error or bias, the resulting conclusions will be flawed. When those conclusions are vetted by human decision-makers, people may be able to spot incorrect conclusions based on insufficient data, but if inaccurate results are fed directly into an artificial intelligence environment, they can trigger actions down the line with serious human consequences and no opportunity for correction. (Consider, for example, the implication of insufficient data for AI-driven traffic lights at a large intersection.)
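A minimal simulation sketch makes the point concrete (the fraud rate and sample sizes here are hypothetical, chosen only for illustration): a small sample can badly misestimate how often an event occurs, while a large one converges on the true value.

import random

random.seed(42)
TRUE_RATE = 0.02  # hypothetical true rate of fraudulent transactions

def estimated_rate(sample_size):
    """Estimate the fraud rate from a random sample of transactions."""
    frauds = sum(1 for _ in range(sample_size) if random.random() < TRUE_RATE)
    return frauds / sample_size

print(estimated_rate(50))         # small sample: can easily read 0.0, or several times the true rate
print(estimated_rate(1_000_000))  # large sample: lands close to 0.02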
Data volume is critical to artificial intelligence. But creating accurate, actionable insights from that data (which is the promise of artificial intelligence) comes with storage, network and processing requirements that exceed the capabilities of traditional computing.
In a traditional computing environment, all data is sent from a device — say, hours of video from a surveillance camera — to a central server. The server processes the information; identifies trends, anomalies and anything else for reporting; then outputs the resulting data or feeds it into a machine-learning system for analysis. In this scenario, all data is sent across a network for central processing.
Edge computing moves the work of data processing from a central server to the device itself. The new generation of smart devices comes with processing capacity baked in. In this scenario, only processed data (results, however they are defined) is sent over the network. In the example of video surveillance, a smart device might relay only the footage that shows movement, sending a small, compressed file to a central server rather than hundreds of hours of video in which nothing happens.
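A minimal sketch of that pattern follows; the motion check, threshold and upload function are illustrative placeholders, not the interface of any particular camera or platform. The idea is simply that filtering happens on the device, so only interesting frames cross the network.

# Edge-side filtering: process frames locally, upload only clips with motion.
def has_motion(prev_frame, frame, threshold=30):
    """Crude motion check: mean absolute pixel difference between frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / len(diffs) > threshold

def process_stream(frames, upload):
    """Runs on the camera itself; only frames that show change leave the device."""
    prev = frames[0]
    for frame in frames[1:]:
        if has_motion(prev, frame):
            upload(frame)  # small payload over the network
        prev = frame       # everything else is discarded locally

# Toy example with 4-pixel "frames": only the changed frame is uploaded.
frames = [[10, 10, 10, 10], [10, 10, 10, 10], [200, 10, 10, 10]]
process_stream(frames, upload=lambda f: print("uploading", f))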
Autonomous intelligence at the edge of the network solves data volume issues, lowers network costs, boosts security, speeds response times and mitigates connectivity issues for a better user experience.
Tech trend journal TG Daily describes three flavors of artificial intelligence, differentiated by the sophistication of their development; the most advanced of these are augmented intelligence and autonomous intelligence.
Advances in edge computing have made augmented intelligence a reality, and ongoing developments continue to push the boundaries of autonomous intelligence. From data-based financial models that anticipate market fluctuations to semi-automated transportation systems that optimize fuel efficiency around rush hour, many industries will soon see disruption and advancement thanks to autonomous intelligence. Supported by edge computing, these kinds of big-data-driven efficiencies are finally poised to enter the mainstream market.
Learn more about our latest work in cutting-edge technology.
Tags: Artificial Intelligence, Digital, Edge, Intelligence