Wikipedia defines edge computing as "a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth." The article's opening paragraphs can be condensed to: "The origins of edge computing lie in content delivery networks that were created in the late 1990s to serve web and video content from edge servers that were deployed close to users. In the early 2000s, these networks evolved to host applications and application components at the edge servers, resulting in the first commercial edge computing services that hosted applications such as dealer locators, shopping carts, real-time data aggregators, and ad insertion engines. Modern edge computing significantly extends this approach through virtualization technology that makes it easier to deploy and run a wider range of applications on the edge servers."
Interpretation of definition
Since the Wikipedia page goes on to describe reduced latency, and uses traditional data centers as a contrast for what edge computing isn't, my interpretation is that edge computing sits somewhere between the (often single) data center serving the application and the client computing device. Although the ultimate degree of distribution would be pushing the application all the way to the client device (or the client browser), we typically call those client applications.
In some definitions edge computing seems to include all devices, and in some it is tied to IoT. In some models, IoT devices connect to the edge, and the edge in turn connects to cloud data centers, sometimes with a layer in between called "fog". Where to draw the line is blurry at best.
Content delivery networks might be viewed as a special-purpose edge application (a caching application) that evolved early and over time has matured into a very well packaged service usable for many applications. One of the strengths of CDNs is that they have been optimized over many years, and this optimization has led to very specific architectures and infrastructures built to serve this one application extremely efficiently. If caching is what you need, designing your own edge application for it is very likely a waste of time.
The optimization that has gone into CDNs also means they seem to struggle to offer great general-purpose platforms for edge computing, although several vendors appear to be moving in that direction.
Cloud vendors come from the other end of the spectrum: after successfully moving computing out of businesses' private data centers, they have gradually added more and more data centers to cover the globe. As they have expanded, they have also added services that compete directly with CDN vendors and launched services to simplify building edge applications.
The cloud vendors have all the flexibility you could ask for in their data centers, and as they expand with more and more locations the boundary is increasingly blurred: in how many data centers must your application run before it counts as an edge application?
Since the defining feature of an edge application is that it runs closer to the user to reduce latency and improve the customer experience (bringing a bunch of other benefits, and some problems, along the way), the obvious question is: how much distribution is really needed?
The CDN vendors have traditionally talked about having thousands, even hundreds of thousands, of points of presence (PoPs) to distribute content. This extremely fine-grained network is something that can be leveraged for an edge application as well.
One part of the latency isn't addressed by any of this: the client's first/last-mile latency before reaching high-speed backbone connections. Assuming we don't call it an edge application if it runs in a single data center serving a globally distributed user base, the latency that can be optimized is from the client's backbone connection point to wherever the data center is. For a global application served from a single data center, the majority of that latency comes from crossing continents and oceans to reach the data.
So... putting the application in a small set of well-chosen data center locations realizes most of the potential latency improvement. But once the application is distributed to a handful of locations, the benefit of distributing it further rapidly diminishes. Going from one to five locations will be huge, going from five to ten will be much less noticeable, and adding further locations will bring only very small latency improvements.
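The diminishing returns can be illustrated with a toy simulation. This is only a sketch under strong assumptions: clients and data centers are random points on a flat unit square, and "latency" is just straight-line distance to the nearest data center, ignoring real geography, routing, and the first/last-mile delay discussed above. The function name and parameters are mine, not from any real tool.

```python
import math
import random

random.seed(42)

def mean_latency(num_centers, num_clients=10_000):
    """Toy model: average distance from a random client to the
    nearest of num_centers randomly placed data centers on a unit square."""
    centers = [(random.random(), random.random()) for _ in range(num_centers)]
    total = 0.0
    for _ in range(num_clients):
        cx, cy = random.random(), random.random()
        total += min(math.hypot(cx - x, cy - y) for x, y in centers)
    return total / num_clients

for n in (1, 5, 10, 50, 100):
    print(f"{n:3d} locations -> mean distance {mean_latency(n):.3f}")
```

In this model the mean nearest-center distance shrinks roughly with the square root of the number of locations, so the step from one to five locations cuts it dramatically while the step from fifty to a hundred barely registers, which mirrors the argument above.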
My best guess is that the cloud vendors (e.g. the big three) will win this battle by offering better flexibility for customers: a simple framework for distributing applications across a set of locations that is more than good enough for most of them. Only a few very special edge applications will move to CDN-like networks for much more granular distribution. The cloud vendors also have the strongest acceptance among, and best relationships with, the developers needed to build these applications, which will be an important factor.