This is the Windows app named MACE, whose latest release can be downloaded as v1.1.1.zip. It can be run online on OnWorks, a free hosting provider for workstations.
Download and run this app named MACE online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Open our file manager at https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start any OnWorks online OS emulator from this website, preferably the Windows online emulator.
- 5. From the OnWorks Windows OS you have just started, go to our file manager at https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application and install it.
- 7. Install Wine from your Linux distribution's software repositories. Once it is installed, you can double-click the app to run it with Wine. You can also try PlayOnLinux, a friendly interface on top of Wine that helps you install popular Windows programs and games.
Wine is a way to run Windows software on Linux without needing Windows. It is an open-source Windows compatibility layer that can run Windows programs directly on any Linux desktop. Essentially, Wine re-implements enough of Windows from scratch to run Windows applications without an actual copy of Windows.
DESCRIPTION:
Mobile AI Compute Engine (MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options, such as big.LITTLE scheduling and Adreno GPU hints, are exposed as advanced APIs. Because UI responsiveness sometimes has to be guaranteed while a model is running, mechanisms such as automatically breaking OpenCL kernels into small units are introduced to allow better preemption of the UI rendering task. Graph-level memory allocation optimization and buffer reuse are supported. The core library keeps external dependencies to a minimum so that the library footprint stays small.
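As an illustration of how a converted model is typically driven from application code, here is a minimal C++ sketch based on the engine API documented upstream in mace/public/mace.h. The exact signatures of CreateMaceEngineFromProto, SetCPUThreadPolicy and SetGPUHints have varied between MACE releases, and the model buffers, tensor names and shapes below are placeholders, so treat this as an outline rather than copy-ready code.

    // Sketch of running a converted model with MACE's C++ engine API.
    // NOTE: signatures are approximate and version-dependent; model_graph_data,
    // model_graph_size, model_weights_data and model_weights_size stand in for
    // the output of MACE's model converter.
    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    #include "mace/public/mace.h"

    extern const unsigned char *model_graph_data;    // placeholder
    extern const size_t model_graph_size;            // placeholder
    extern const unsigned char *model_weights_data;  // placeholder
    extern const size_t model_weights_size;          // placeholder

    int main() {
      // Pick a runtime: CPU (NEON), GPU (OpenCL) or HEXAGON (DSP).
      mace::MaceEngineConfig config(mace::DeviceType::GPU);

      // Chip-dependent power options mentioned above: big.LITTLE-aware CPU
      // thread affinity and Adreno GPU performance/priority hints.
      config.SetCPUThreadPolicy(4, mace::CPUAffinityPolicy::AFFINITY_BIG_ONLY);
      config.SetGPUHints(mace::GPUPerfHint::PERF_HIGH,
                         mace::GPUPriorityHint::PRIORITY_LOW);

      // Build the engine from the converted model graph and weights.
      std::vector<std::string> input_names{"input"};    // placeholder tensor names
      std::vector<std::string> output_names{"output"};
      std::shared_ptr<mace::MaceEngine> engine;
      mace::MaceStatus status = mace::CreateMaceEngineFromProto(
          model_graph_data, model_graph_size,
          model_weights_data, model_weights_size,
          input_names, output_names, config, &engine);
      if (status != mace::MaceStatus::MACE_SUCCESS) return 1;

      // Wrap raw float buffers in MaceTensor objects keyed by tensor name.
      const std::vector<int64_t> in_shape{1, 224, 224, 3};   // example shapes
      const std::vector<int64_t> out_shape{1, 1000};
      auto in_buf = std::shared_ptr<float>(new float[1 * 224 * 224 * 3],
                                           std::default_delete<float[]>());
      auto out_buf = std::shared_ptr<float>(new float[1000],
                                            std::default_delete<float[]>());
      // ... fill in_buf with preprocessed input data here ...

      std::map<std::string, mace::MaceTensor> inputs;
      std::map<std::string, mace::MaceTensor> outputs;
      inputs["input"] = mace::MaceTensor(in_shape, in_buf);
      outputs["output"] = mace::MaceTensor(out_shape, out_buf);

      // Run inference; out_buf then holds the model's output values.
      engine->Run(inputs, &outputs);
      return 0;
    }

When the model is converted to C++ code (the model-protection option listed under Features below), the graph and weights are linked into the application binary instead of being loaded from external files.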
Features
- Model protection has been the highest priority since the beginning of the design
- Various techniques are used, such as converting models to C++ code and obfuscating literals
- Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based chips
- CPU runtime supports Android, iOS and Linux
- TensorFlow, Caffe and ONNX model formats are supported
- Runtime is optimized with NEON, OpenCL and Hexagon
Programming Language
C++
Categories
This application can also be fetched from https://sourceforge.net/projects/mace.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.