...
However, after all is said and done, Tegra 4 might have an Achilles' heel in the making. While company representatives often stated that Tegra 3's competitiveness was stifled by the lack of 4G LTE baseband hardware (the HTC One X+ 4G LTE smartphone pairs Tegra 3 with an Intel/ex-Infineon baseband chip), Tegra 4 might be suffering from another issue: lack of API support. Back in 2010, when Nvidia disclosed its roadmap for the future, it stated that its desktop GPU architectures would move into Tegra as well, and that future parts such as Wayne (T40, Tegra 4) and Logan ("Wolverine", T50, Tegra 5) would offer GPU compute capabilities.
We asked Nvidia which APIs (Application Programming Interfaces) Tegra 4 supports, since competing parts from Qualcomm (Adreno 300 series) and ARM (Mali-T600) now support DirectX 11.0, OpenCL, the latest OpenGL, the latest OpenGL ES and so on. The response we got from Nvidia's Tegra 4 team was underwhelming:
"Kepler isn't in Tegra 4, but Tegra 4's GPU leverages multiple GeForce architectures and elements - including Kepler and others."
"Tegra 4 uses our most advanced mobile GPU ever. It adds amazing features like true HDR rendering and supports ultra-efficient computation, like what is done for the computational photography architecture and the first "Always On" HDR camera. It is our fifth generation mobile GPU and 6x the power of Tegra 3's GPU - a major leap in capability (for reference Tegra 3 was 1.5x the horsepower compared to Tegra 2)."
In order to understand this answer, you have to understand that the original GeForce ULP used in Tegra 2 and Tegra 3 was based on the NV4x architecture (GeForce 6/7 series), launched in 2004. This is the same GPU architecture that powers every PlayStation 3 console (the RSX is a 2005 GeForce 7800 GTX "NV47" chip with a 128-bit memory controller and a continuously updated manufacturing process).
"Tegra 4 has 6x the number of GPU shader cores in Tegra 3 (72 vs. 12), and includes various cache and pipeline optimizations. In actual game testing -- with other factors at play, such as game code efficiencies, driver stacks, CPU processing, memory subsystem operations - users should see about 3x - 4x delivered performance improvements for graphics-based benchmarks."
While it is great to see HDR capabilities trickle down from specialized hardware such as RED Epic and Scarlet cameras to commodity hardware such as Tegra 4 (naturally, the capabilities differ widely, but the concept is the same), the troublesome part was the lack of any detail on API support, as it is obvious that Always On HDR is an Nvidia feature and not exactly exposed to third-party developers. The last part of the response finally answered our question:
"Today's mobile apps do not take advantage of OCL (OpenCL), CUDA or Advanced OGL (OpenGL), nor are these APIs exposed in any OS. Tegra 4's GPU is very powerful and dedicates its resources toward improving real end user experiences."
To put this answer in perspective, Nvidia - a company almost always known for innovation in the desktop and mobile computing space - does not consider APIs such as OpenCL and its own CUDA important for ultra-efficient computing. This attitude has already turned a substantial design win sour: the company was dropped by the BMW Group a year and a few quarters after it triumphantly pushed Intel out of BMW's supplier structure.
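For readers wondering what is actually at stake when these APIs are not exposed, the sketch below is a minimal, hypothetical CUDA example of the kind of data-parallel kernel that third-party developers could offload to the GPU if CUDA (or, in similar form, OpenCL) were available; the function names and sizes are our own illustration, not anything Nvidia has shown for Tegra 4.

// Hypothetical sketch: a trivial data-parallel kernel of the sort an app
// could offload to the GPU if CUDA or OpenCL were exposed on the platform.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread brightens one pixel value - a toy stand-in for the kind of
// computational-photography work Nvidia describes.
__global__ void brighten(const float *in, float *out, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * gain;
}

int main()
{
    const int n = 1 << 20;                  // one million pixel values
    size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = 0.5f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    brighten<<<(n + 255) / 256, 256>>>(d_in, d_out, 1.8f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", h_out[0]);      // expect 0.9

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}

Competing SoCs that expose OpenCL can run the equivalent of this kernel today; on Tegra 4, going by Nvidia's own statement, such work either stays on the CPU or remains locked inside Nvidia's internal features.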
...
For now, then, the company does not believe in CUDA, does not believe in OpenCL, and apparently has little faith in the success of Windows RT / Windows 8, since Tegra 4 does not support DirectX levels higher than 9. This will probably be painfully exposed by the next generation of cross-platform benchmarks from Futuremark (the next-gen 3DMark is cross-platform: Windows RT/8, Android, iOS) and Rightware (Basemark X), all coming out in the upcoming weeks.
Will the company change its tune with Logan (T50, Tegra 5) or Stark (T60, Tegra 6), future chips based on Project Denver, the custom 64-bit ARM core in which the company has invested a lot of resources (and credibility), and a Maxwell GPU core? Or are we going to see "a backup version" with 64-bit Cortex-A57 cores and a 1xx-core GPU based on a combination of architectures? Only time will tell.