MILOS is a physical laboratory housed in San Pedro I, a University of Texas at San Antonio (UTSA) building in downtown San Antonio. Constructed in 2022, San Pedro I is a 167,000-square-foot, six-story facility at 506 Dolorosa St. along San Pedro Creek, just east of UTSA’s Downtown Campus. The MILOS space includes 10 dedicated workstations.
NVIDIA DGX A100 Server (x1)
The MILOS computing infrastructure is anchored by a dedicated NVIDIA DGX A100, delivering 5 petaFLOPS of AI performance with 8× NVIDIA A100 Tensor Core GPUs (40GB each) and dual AMD EPYC 7742 (Rome) CPUs (64 cores each, 128 cores total), making it a state-of-the-art platform for AI workloads.
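As a rough illustration of how a multi-GPU node of this class is typically exercised, the following minimal PyTorch sketch enumerates the visible GPUs and runs a small matrix multiply on each. It assumes only that PyTorch with CUDA support is installed; it is not a prescribed MILOS workflow.

```python
# Minimal sketch: enumerate a DGX A100's GPUs and run a small matrix
# multiply on each. Assumes PyTorch with CUDA support is installed;
# illustrative only, not a prescribed MILOS workflow.
import torch

def probe_gpus() -> None:
    count = torch.cuda.device_count()  # expect 8 on a DGX A100
    print(f"Visible CUDA devices: {count}")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"  cuda:{i}: {props.name}, {mem_gb:.0f} GB")
        # Tiny workload to confirm the device is usable.
        x = torch.randn(1024, 1024, device=f"cuda:{i}")
        y = x @ x
        torch.cuda.synchronize(i)
        print(f"    matmul ok, |y| = {y.norm().item():.2f}")

if __name__ == "__main__":
    probe_gpus()
```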
Lambda Vector Threadripper Pro Workstations (x4)
Each Vector is equipped with an AMD Ryzen Threadripper PRO 5955WX CPU (16 cores) and dual NVIDIA RTX A6000 GPUs (48GB each).
Dell OptiPlex 7000 (x6)
Each OptiPlex is powered by a 12th Gen Intel Core i7-12700 CPU, 32GB of DDR4 memory, and a 512GB PCIe NVMe SSD, making it well suited to numerical experiments, code prototyping, and report preparation.
MILOS is a member of the MATRIX AI Consortium, UTSA’s leading organization for AI research. MATRIX offers extensive on-premises and cloud-based computational resources, providing substantial support to its members, including:
3 NVIDIA DGX Stations.
Each DGX Station includes:
Intel Xeon E5-2698 v4 (20-core, 2.2 GHz) CPU
4× NVIDIA Tesla V100-DGXS-32GB GPUs
256GB ECC RDIMM DDR4 system memory
3 Lambda Blade Deep Learning Servers (768GB of GPU memory combined).
2 servers – Each with 8× NVIDIA Quadro RTX 6000 GPUs (24GB each) and 2× Intel Xeon Gold 6230 CPUs (20-core, 2.10 GHz)
1 server – Equipped with 8× NVIDIA Quadro RTX 8000 GPUs (48GB each) and 2× Intel Xeon Gold 5218 CPUs (16-core, 2.30 GHz)
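To give a sense of how one blade's eight GPUs might be used together, here is a minimal data-parallel training sketch using PyTorch DistributedDataParallel, with one process per GPU. The model, data, and hyperparameters are placeholders, and a PyTorch build with NCCL support is assumed; this is illustrative, not a MATRIX-endorsed workflow.

```python
# Minimal sketch: one process per GPU on an 8-GPU Lambda Blade using
# torch.distributed. Model, data, and hyperparameters are placeholders;
# assumes a PyTorch build with NCCL support.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(512, 10).cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(3):  # placeholder loop over synthetic data
        x = torch.randn(64, 512, device=rank)
        target = torch.randint(0, 10, (64,), device=rank)
        loss = torch.nn.functional.cross_entropy(model(x), target)
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across all 8 GPUs
        opt.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()  # 8 on a fully populated blade
    mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```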
The ARC Cluster is UTSA’s primary HPC system, comprising 169 compute/GPU nodes and 2 login nodes powered by a mix of Intel Cascade Lake and AMD EPYC CPUs. A sketch of a typical batch-job submission appears at the end of this section.
Compute and GPU Nodes
30 GPU nodes – Each with 2× Intel CPUs (20 cores each, 40 cores total), 384GB RAM, and 1× NVIDIA V100 GPU
5 GPU nodes – Each with 2× Intel CPUs (40 cores total), 384GB RAM, and 2× NVIDIA V100 GPUs
2 GPU nodes – Each with 2× Intel CPUs, 384GB RAM, and 4× NVIDIA V100 GPUs
2 GPU nodes – Each with 2× AMD EPYC CPUs, 1× NVIDIA A100 (80GB), and 1TB RAM
2 large-memory nodes – Each with 4× Intel CPUs (80 cores total) and 1.5TB RAM
1 large-memory node – Equipped with 2× AMD EPYC CPUs and 2TB RAM
1 node – Equipped with 2× AMD EPYC CPUs and 1TB RAM
5 nodes – Each with 2× AMD EPYC CPUs, 1× NEC Vector Engine, and 1TB RAM
High-Speed Connectivity and Storage
100 Gbps InfiniBand interconnect for fast communication between nodes
Two Lustre filesystems:
/home – 110TB storage
/work – 1.1PB storage
250TB of cumulative local scratch storage (approx. 1.5TB per compute/GPU node)
NVIDIA DGX Servers
3× NVIDIA DGX A100 systems, each featuring:
8× NVIDIA A100 (80GB) GPUs
2TB system memory
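As promised above, here is a sketch of how a single-GPU batch job might be composed and submitted on a cluster like ARC, assuming it runs the Slurm scheduler. The module name, GRES label, resource amounts, and script names are all placeholders; consult the ARC documentation for actual values.

```python
# Minimal sketch: compose and submit a single-GPU Slurm batch job from
# Python. Assumes ARC uses Slurm; the module name, GRES label, and
# workload script are placeholders used for illustration only.
import subprocess
import tempfile

JOB_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1      # one V100 on a standard ARC GPU node
#SBATCH --mem=64G         # well under a GPU node's 384GB
#SBATCH --time=01:00:00
#SBATCH --output=%x-%j.out

module load python        # placeholder environment setup
python my_experiment.py   # placeholder workload
"""

def submit() -> None:
    with tempfile.NamedTemporaryFile(
        "w", suffix=".sbatch", delete=False
    ) as f:
        f.write(JOB_SCRIPT)
        path = f.name
    # On success, sbatch prints "Submitted batch job <id>".
    result = subprocess.run(["sbatch", path], capture_output=True, text=True)
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    submit()
```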