
Main memory is another source of interference that can be significant. In fact, the interference observed in the experiments shown in the figure above is due to memory interference and not cache interference.

Main memory is divided into regions called banks. These banks are in turn organized into rows and columns. Whenever a task running on a core accesses a memory address in main memory, the address is first decoded to extract three pieces of information from specific bits: (i) the bank number, (ii) the row number, and (iii) the column number.
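As a minimal sketch, this decoding amounts to simple bit extraction. The bit positions below are hypothetical; real memory controllers use architecture-specific (and often undocumented) mappings:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical address layout (real controllers differ):
 * bits 0-5  : column offset within a row
 * bits 6-7  : bank number (4 banks)
 * bits 8-31 : row number
 */
#define COLUMN_BITS 6
#define BANK_BITS   2

static uint32_t column_of(uint32_t addr) { return addr & ((1u << COLUMN_BITS) - 1); }
static uint32_t bank_of(uint32_t addr)   { return (addr >> COLUMN_BITS) & ((1u << BANK_BITS) - 1); }
static uint32_t row_of(uint32_t addr)    { return addr >> (COLUMN_BITS + BANK_BITS); }

int main(void) {
    uint32_t addr = 0x0001A6C4;
    printf("bank=%u row=%u column=%u\n",
           (unsigned)bank_of(addr), (unsigned)row_of(addr), (unsigned)column_of(addr));
    return 0;
}
```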

The bank number is used to select the bank where the memory block is located. The memory controller then loads the corresponding row from that bank into a row buffer within the bank for faster access.

Finally, the memory block is accessed from the row buffer at the position indicated by the column number. This can be seen in the figure below. Because the memory controller is optimized to maximize memory accesses per second, it takes advantage of the row buffer and favors accesses that go to the currently loaded row.
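To make the row-buffer effect concrete, here is a toy latency model. The cycle costs are assumed values for illustration, and each bank is modeled with a single open row; it only shows why same-row accesses are cheaper:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS        4
#define ROW_HIT_CYCLES  10   /* assumed cost: access served from the open row buffer */
#define ROW_MISS_CYCLES 40   /* assumed cost: close the old row, open the new one */

static int32_t open_row[NUM_BANKS] = { -1, -1, -1, -1 };   /* -1 = no row loaded */

/* Toy latency of accessing `row` in `bank`, updating the row buffer. */
static uint32_t access_latency(uint32_t bank, int32_t row) {
    if (open_row[bank] == row)
        return ROW_HIT_CYCLES;
    open_row[bank] = row;
    return ROW_MISS_CYCLES;
}

int main(void) {
    /* Two accesses to the same row: miss then hit. A third access to a
     * different row in the same bank pays the miss cost again. */
    printf("%u %u %u\n", access_latency(0, 7), access_latency(0, 7),
           access_latency(0, 9));   /* prints: 40 10 40 */
    return 0;
}
```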

Unfortunately, this means that when task 1 on one core is accessing a row already loaded in the row buffer while task 2 on another core is trying to access a different row in the same bank, the access from task 2 can be pushed back in the memory access queue, multiple times, by more recent accesses from task 1 to the already loaded row, creating a significant delay for task 2.

Memory bank partitions are created by mapping the memory of different tasks to different memory banks. This way each task can have its own bank and row buffer, and no other task will modify that buffer or the queue of memory accesses to that bank. Because caches and memory banking technologies were not developed together, more often than not their partitions intersect each other. In other words, it is not possible to select a bank color independently from a cache color, because the selection of a cache color may limit the number of bank colors available.
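A minimal sketch of bank partitioning via page coloring, under the assumption that the bank-selector bits fall inside the physical frame number (the bit positions are hypothetical): the allocator hands a task only frames whose bank color matches the task's assigned color.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BANK_SHIFT 2    /* assumed: bank-color bits sit at bits 2-3 of the frame number */
#define BANK_MASK  0x3  /* 4 banks -> 4 bank colors */

static uint32_t bank_color_of(uint32_t frame) {
    return (frame >> BANK_SHIFT) & BANK_MASK;
}

/* Return the first free frame with the requested bank color, or -1 if the
 * color is exhausted. A real allocator would keep one free list per color. */
static int64_t alloc_colored_frame(const uint32_t *free_frames, size_t n,
                                   uint32_t color) {
    for (size_t i = 0; i < n; i++)
        if (bank_color_of(free_frames[i]) == color)
            return (int64_t)free_frames[i];
    return -1;
}

int main(void) {
    uint32_t free_frames[] = { 16, 17, 18, 19, 20 };   /* frames 16-19: color 0; 20: color 1 */
    printf("%lld\n", (long long)alloc_colored_frame(free_frames, 5, 1));  /* prints 20 */
    return 0;
}
```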

This is because, in some processor architectures, the address bits used to select a bank and the bits used to select a cache set overlap. To illustrate this issue, consider a memory system with four banks and four cache sets. In this case, we need two address bits to select a bank and two bits to select a cache set. If they were independent, we would be able to select four cache colors for each bank color, for a total of 16 combinations.

This can be seen as a color matrix where rows are cache colors and columns are bank colors. When the selector bits overlap, some of the cells in the color matrix will not be realizable. This is shown in the figure below. Unfortunately, the number of partitions that can be obtained with page coloring is limited. For instance, for an Intel i7 processor it is possible to obtain 32 cache colors and 16 bank colors. Given that a real system may have a larger number of tasks, this number of partitions may prove insufficient.
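A small sketch of how overlapping selector bits shrink the color matrix, assuming one address bit is shared between the 2-bit bank selector and the 2-bit cache-set selector (the exact overlap is architecture-specific):

```c
#include <stdio.h>

/* Hypothetical 3-bit address field b2 b1 b0:
 *   bank color  = bits b1 b0
 *   cache color = bits b2 b1   (bit b1 is shared by both selectors)
 * Enumerating all addresses shows which (cache, bank) cells are realizable. */
int main(void) {
    int realizable[4][4] = { 0 };
    for (unsigned addr = 0; addr < 8; addr++) {
        unsigned bank  =  addr       & 0x3;
        unsigned cache = (addr >> 1) & 0x3;
        realizable[cache][bank] = 1;
    }
    int count = 0;
    for (int c = 0; c < 4; c++)
        for (int b = 0; b < 4; b++)
            count += realizable[c][b];
    printf("%d of 16 color combinations are realizable\n", count);  /* prints 8 */
    return 0;
}
```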

As a result, it is important to also enable the sharing of partitions whenever the memory bandwidth requirements of the tasks allow it.

However, this sharing must be done in a predictable way, ensuring that we can still guarantee that tasks meet their deadlines. At the same time, it is important to avoid pessimistic over-approximations so as not to waste the processor cycles we were trying to save in the first place. For this case we developed an analysis algorithm that allows us to verify the timing interference of private and shared memory partitions. Beyond solving the resource-sharing problem, we also need to enable the execution of parallelized tasks.

For this we have developed a global EDF scheduling algorithm for parallelized tasks with staged execution. These tasks generate jobs composed of a sequence of stages, each of which is in turn composed of a set of parallel segments. These segments are allowed to run in parallel with each other, provided that all the segments from the previous stage have completed (or, for the segments of the first stage, that the job has arrived).

Our algorithm allows us to verify the schedulability of these tasks under a global EDF scheduler. Beyond EDF, it is also possible to use this algorithm with a global fixed-priority scheduler with synchronous start, harmonic periods, and implicit deadlines, a common configuration used by practitioners.
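A minimal sketch of the staged task model described above; the types and field names are illustrative, not the actual notation of the analysis:

```c
#include <stddef.h>
#include <stdio.h>

/* A stage is a set of segments that may run in parallel with each other. */
typedef struct {
    size_t num_segments;
    double *segment_wcet;   /* worst-case execution time of each segment */
} stage_t;

/* A job is a sequence of stages; stage i+1 may start only after every
 * segment of stage i has completed. */
typedef struct {
    size_t num_stages;
    stage_t *stages;
    double period;          /* implicit deadline == period */
} staged_task_t;

/* Lower bound on a job's completion time even on unlimited cores:
 * stages are sequential, so the longest segment of each stage adds up. */
static double critical_path_length(const staged_task_t *t) {
    double total = 0.0;
    for (size_t i = 0; i < t->num_stages; i++) {
        double longest = 0.0;
        for (size_t j = 0; j < t->stages[i].num_segments; j++)
            if (t->stages[i].segment_wcet[j] > longest)
                longest = t->stages[i].segment_wcet[j];
        total += longest;
    }
    return total;   /* if this exceeds the deadline, no scheduler can help */
}

int main(void) {
    double s0[] = { 3.0, 5.0 }, s1[] = { 2.0 };
    stage_t stages[] = { { 2, s0 }, { 1, s1 } };
    staged_task_t t = { 2, stages, 10.0 };
    printf("%.1f\n", critical_path_length(&t));   /* prints 7.0, within the period of 10.0 */
    return 0;
}
```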

Today's complex CPS are built with the help of analysis algorithms that verify critical properties from different perspectives.

These perspectives include timing guarantees to ensure that the software reacts in sync with the physical processes. While multiple scientific domains keep developing analytic techniques and tools to support the development of CPS, they evolve mostly independently from each other.

This is due to the need to create abstractions that focus on the concerns of a specific domain while eliminating details that are not significant to the concern at hand. Unfortunately, these details can be critical to other domains and analytic tools. For instance, processor frequency scaling and other power-reduction techniques are used to reduce battery energy consumption by reducing the voltage demanded from the battery.

This interacts with new battery charge management techniques that change the interconnections between battery cells to provide different voltages. These interconnections ultimately interact with the thermal dissipation characteristics of the battery, which may even lead to overheating and a fire.

Across these three concerns, different scientific domains provide analytic tools to calculate the processor voltage, schedule battery cell interconnections, and analyze the thermal dissipation characteristics, but they use abstractions that ignore each other's details and hence their interactions. To solve this problem we have developed analysis contracts. An analysis contract is built on an architectural model that hosts multiple analytic domains supporting multiple analytic techniques.

Within this model, an analysis contract specifies the parts of the model that an analysis reads and writes, the assumptions it makes about the model, and the guarantees it provides once the analysis runs successfully. A contract verification engine reads these contracts, identifies the analyses that interact with each other, and then verifies the assumptions of each analysis against the guarantees of the analyses it depends on to ensure that they are satisfied.

In particular, these assumptions and guarantees are described in contract formulas that combine first-order logic and linear temporal logic, and they are verified using SMT solvers and model checkers.
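As a hypothetical illustration (not a formula from the actual contract library), a contract for a frequency-scaling analysis might guarantee that every thread still meets its deadline at the reduced frequency, combining a first-order quantifier with an LTL "globally" operator:

$$\forall t \in \mathit{Threads}:\; \mathbf{G}\,\big(\mathit{wcet}(t, f_{\min}) \le \mathit{deadline}(t)\big)$$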

To enable this verification, we created an interaction domain that captures the inter-domain interactions within the modeling language of the model checker. This allows us to create the sorts that we describe in the contract formulas, such as threads, battery cells, periods, deadlines, and charge levels, as well as the functions that relate them to build the appropriate formulas (see figure below).

In this podcast, Andrew Mellinger, a senior software developer in the SEI's Emerging Technology Center, discusses work to develop a platform to organize dynamic defenses.

All those defenses, what they evoke is this kind of big monolithic, static set of walls, OK? Within enterprise networks, what we find is that that gives a lot of opportunity to our attackers to understand what we do. Dynamic network defense or moving target defense is based on a simple premise: a moving target is harder to attack than a stationary target.


