Cgroups allow artificially limiting the CPU time available to a process via the CFS bandwidth-control parameters cpu.cfs_quota_us and cpu.cfs_period_us: the process may run for at most quota microseconds within each period.
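For concreteness, a minimal sketch of such a limit, assuming a cgroup v1 hierarchy mounted at /sys/fs/cgroup/cpu and run as root; the group name "throttled" is made up for this example:

```python
import os

# Hypothetical cgroup path; assumes cgroup v1 with the cpu controller
# mounted at /sys/fs/cgroup/cpu (cgroup v2 uses cpu.max instead).
CG = "/sys/fs/cgroup/cpu/throttled"
os.makedirs(CG, exist_ok=True)

with open(os.path.join(CG, "cpu.cfs_period_us"), "w") as f:
    f.write("100000")  # scheduling period: 100 ms
with open(os.path.join(CG, "cpu.cfs_quota_us"), "w") as f:
    f.write("25000")   # runnable 25 ms per period -> 25% of one CPU

# Move the current process into the cgroup:
with open(os.path.join(CG, "tasks"), "w") as f:
    f.write(str(os.getpid()))
```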
This, however, creates a discrepancy when the program monitors its own CPU usage (e.g. by comparing wall-clock time against CPU time): the throttled process observes low CPU usage and concludes it has spare capacity, so graceful-degradation logic (such as lowering the quality of some real-time output) may never kick in.
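A sketch of the kind of self-monitoring the question describes (the function name is made up); under a 25%/period quota the measured ratio hovers around 0.25 even though the process is runnable the whole time:

```python
import time

def cpu_share(busy_seconds: float = 0.2) -> float:
    """CPU time consumed per unit of wall time over a busy interval."""
    wall0 = time.monotonic()
    cpu0 = time.process_time()  # user + system CPU time of this process
    while time.monotonic() - wall0 < busy_seconds:
        pass                    # spin, consuming as much CPU as allowed
    wall = time.monotonic() - wall0
    cpu = time.process_time() - cpu0
    return cpu / wall

share = cpu_share()
# Unthrottled, share is close to 1.0; under cfs_quota_us=25000,
# cfs_period_us=100000 it stays near 0.25 -> degradation never triggers.
```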
How can I make the program believe it is consuming 100% of the CPU while still limiting it with a cgroup policy?