PMC.COREI7UC(3) FreeBSD Library Functions Manual PMC.COREI7UC(3)

NAME
pmc.corei7uc - uncore measurement events for Intel Core i7 and Xeon 5500 family CPUs

LIBRARY
Performance Counters Library (libpmc, -lpmc)

SYNOPSIS
#include <pmc.h>

DESCRIPTION
Intel Core i7 CPUs contain PMCs conforming to version 2 of the Intel performance measurement architecture. These CPUs contain two classes of PMCs:
- Fixed-function counters that count only one hardware event per counter.
- Programmable counters that may be configured to count one of a defined set of hardware events.

The number of PMCs available in each class and their widths need to be determined at run time by calling pmc_cpuinfo(3).
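As a minimal sketch of this run-time discovery (FreeBSD-specific; it requires the hwpmc(4) driver to be loaded, and the struct field names are taken from pmc(3)):

```c
/*
 * Sketch: discover PMC classes, counter counts and widths at run time.
 * FreeBSD-only; compile with: cc query.c -lpmc
 */
#include <stdio.h>
#include <stdint.h>
#include <pmc.h>

int
main(void)
{
	const struct pmc_cpuinfo *ci;

	if (pmc_init() < 0) {		/* attach to the hwpmc(4) driver */
		perror("pmc_init");
		return (1);
	}
	if (pmc_cpuinfo(&ci) < 0) {
		perror("pmc_cpuinfo");
		return (1);
	}
	printf("%u CPUs, %u PMCs, %u classes\n",
	    ci->pm_ncpu, ci->pm_npmc, ci->pm_nclass);
	for (uint32_t i = 0; i < ci->pm_nclass; i++)
		printf("class %s: %u counters, %u bits wide\n",
		    pmc_name_of_class(ci->pm_classes[i].pm_class),
		    ci->pm_classes[i].pm_num,
		    ci->pm_classes[i].pm_width);
	return (0);
}
```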

Intel Core i7 and Xeon 5500 PMCs are documented in Volume 3B: System Programming Guide, Part 2, Intel(R) 64 and IA-32 Architectures Software Developer's Manual, Order Number: 253669-033US, Intel Corporation, December 2009.

The uncore fixed-function PMCs and their supported events are documented in pmc.ucf(3).

The programmable PMCs support the following capabilities:
Capability         Support
PMC_CAP_CASCADE    No
PMC_CAP_EDGE       Yes
PMC_CAP_INTERRUPT  No
PMC_CAP_INVERT     Yes
PMC_CAP_READ       Yes
PMC_CAP_PRECISE    No
PMC_CAP_SYSTEM     No
PMC_CAP_TAGGING    No
PMC_CAP_THRESHOLD  Yes
PMC_CAP_USER       No
PMC_CAP_WRITE      Yes

Event specifiers for these PMCs support the following common qualifiers:
cmask=value
    Configure the PMC to increment only if the number of configured events measured in a cycle is greater than or equal to value.
edge
    Configure the PMC to count the number of de-asserted to asserted transitions of the conditions expressed by the other qualifiers. If specified, the counter will increment only once whenever a condition becomes true, irrespective of the number of clocks during which the condition remains true.
inv
    Invert the sense of comparison when the “cmask” qualifier is present, making the counter increment when the number of events per cycle is less than the value specified by the “cmask” qualifier.

Core i7 and Xeon 5500 uncore programmable PMCs support the following events:
(Event 00H, Umask 01H) Uncore cycles Global Queue read tracker is full.
(Event 00H, Umask 02H) Uncore cycles Global Queue write tracker is full.
(Event 00H, Umask 04H) Uncore cycles Global Queue peer probe tracker is full. The peer probe tracker queue tracks snoops from the IOH and remote sockets.
(Event 01H, Umask 01H) Uncore cycles where the Global Queue read tracker has at least one valid entry.
(Event 01H, Umask 02H) Uncore cycles where the Global Queue write tracker has at least one valid entry.
(Event 01H, Umask 04H) Uncore cycles where the Global Queue peer probe tracker has at least one valid entry. The peer probe tracker queue tracks IOH and remote socket snoops.
(Event 03H, Umask 01H) Counts the number of read tracker allocate to deallocate entries. The GQ read tracker allocate to deallocate occupancy count is divided by the count to obtain the average read tracker latency.
(Event 03H, Umask 02H) Counts the number GQ read tracker entries for which a full cache line read has missed the L3. The GQ read tracker L3 miss to fill occupancy count is divided by this count to obtain the average cache line read L3 miss latency. The latency represents the time after which the L3 has determined that the cache line has missed. The time between a GQ read tracker allocation and the L3 determining that the cache line has missed is the average L3 hit latency. The total L3 cache line read miss latency is the hit latency + L3 miss latency.
(Event 03H, Umask 04H) Counts the number of GQ read tracker entries that are allocated in the read tracker queue that hit or miss the L3. The GQ read tracker L3 hit occupancy count is divided by this count to obtain the average L3 hit latency.
(Event 03H, Umask 08H) Counts the number of GQ read tracker entries that are allocated in the read tracker, have missed in the L3 and have not acquired a Request Transaction ID. The GQ read tracker L3 miss to RTID acquired occupancy count is divided by this count to obtain the average latency for a read L3 miss to acquire an RTID.
(Event 03H, Umask 10H) Counts the number of GQ write tracker entries that are allocated in the write tracker, have missed in the L3 and have not acquired a Request Transaction ID. The GQ write tracker L3 miss to RTID occupancy count is divided by this count to obtain the average latency for a write L3 miss to acquire an RTID.
(Event 03H, Umask 20H) Counts the number of GQ write tracker entries that are allocated in the write tracker queue that miss the L3. The GQ write tracker occupancy count is divided by this count to obtain the average L3 write miss latency.
(Event 03H, Umask 40H) Counts the number of GQ peer probe tracker (snoop) entries that are allocated in the peer probe tracker queue that miss the L3. The GQ peer probe occupancy count is divided by this count to obtain the average L3 peer probe miss latency.
(Event 04H, Umask 01H) Cycles Global Queue Quickpath Interface input data port is busy importing data from the Quickpath Interface. Each cycle the input port can transfer 8 or 16 bytes of data.
(Event 04H, Umask 02H) Cycles Global Queue Quickpath Memory Interface input data port is busy importing data from the Quickpath Memory Interface. Each cycle the input port can transfer 8 or 16 bytes of data.
(Event 04H, Umask 04H) Cycles GQ L3 input data port is busy importing data from the Last Level Cache. Each cycle the input port can transfer 32 bytes of data.
(Event 04H, Umask 08H) Cycles GQ Core 0 and 2 input data port is busy importing data from processor cores 0 and 2. Each cycle the input port can transfer 32 bytes of data.
(Event 04H, Umask 10H) Cycles GQ Core 1 and 3 input data port is busy importing data from processor cores 1 and 3. Each cycle the input port can transfer 32 bytes of data.
(Event 05H, Umask 01H) Cycles GQ QPI and QMC output data port is busy sending data to the Quickpath Interface or Quickpath Memory Interface. Each cycle the output port can transfer 32 bytes of data.
(Event 05H, Umask 02H) Cycles GQ L3 output data port is busy sending data to the Last Level Cache. Each cycle the output port can transfer 32 bytes of data.
(Event 05H, Umask 04H) Cycles GQ Core output data port is busy sending data to the Cores. Each cycle the output port can transfer 32 bytes of data.
(Event 06H, Umask 01H) Number of snoop responses to the local home that L3 does not have the referenced cache line.
(Event 06H, Umask 02H) Number of snoop responses to the local home that L3 has the referenced line cached in the S state.
(Event 06H, Umask 04H) Number of responses to code or data read snoops to the local home that the L3 has the referenced cache line in the E state. The L3 cache line state is changed to the S state and the line is forwarded to the local home in the S state.
(Event 06H, Umask 08H) Number of responses to read invalidate snoops to the local home that the L3 has the referenced cache line in the M state. The L3 cache line state is invalidated and the line is forwarded to the local home in the M state.
(Event 06H, Umask 10H) Number of conflict snoop responses sent to the local home.
(Event 06H, Umask 20H) Number of responses to code or data read snoops to the local home that the L3 has the referenced line cached in the M state.
(Event 07H, Umask 01H) Number of snoop responses to a remote home that L3 does not have the referenced cache line.
(Event 07H, Umask 02H) Number of snoop responses to a remote home that L3 has the referenced line cached in the S state.
(Event 07H, Umask 04H) Number of responses to code or data read snoops to a remote home that the L3 has the referenced cache line in the E state. The L3 cache line state is changed to the S state and the line is forwarded to the remote home in the S state.
(Event 07H, Umask 08H) Number of responses to read invalidate snoops to a remote home that the L3 has the referenced cache line in the M state. The L3 cache line state is invalidated and the line is forwarded to the remote home in the M state.
(Event 07H, Umask 10H) Number of conflict snoop responses sent to the local home.
(Event 07H, Umask 20H) Number of responses to code or data read snoops to a remote home that the L3 has the referenced line cached in the M state.
(Event 07H, Umask 24H) Number of HITM snoop responses to a remote home.
(Event 08H, Umask 01H) Number of code read, data read and RFO requests that hit in the L3.
(Event 08H, Umask 02H) Number of writeback requests that hit in the L3. Writebacks from the cores will always result in L3 hits due to the inclusive property of the L3.
(Event 08H, Umask 04H) Number of snoops from IOH or remote sockets that hit in the L3.
(Event 08H, Umask 03H) Number of reads and writes that hit the L3.
(Event 09H, Umask 01H) Number of code read, data read and RFO requests that miss the L3.
(Event 09H, Umask 02H) Number of writeback requests that miss the L3. Should always be zero as writebacks from the cores will always result in L3 hits due to the inclusive property of the L3.
(Event 09H, Umask 04H) Number of snoops from IOH or remote sockets that miss the L3.
(Event 09H, Umask 03H) Number of reads and writes that miss the L3.
(Event 0AH, Umask 01H) Counts the number of L3 lines allocated in M state. The only time a cache line is allocated in the M state is when the line was forwarded in the M state due to a Snoop Read Invalidate Own request.
(Event 0AH, Umask 02H) Counts the number of L3 lines allocated in E state.
(Event 0AH, Umask 04H) Counts the number of L3 lines allocated in S state.
(Event 0AH, Umask 08H) Counts the number of L3 lines allocated in F state.
(Event 0AH, Umask 0FH) Counts the number of L3 lines allocated in any state.
(Event 0BH, Umask 01H) Counts the number of L3 lines victimized that were in the M state. When the victim cache line is in M state, the line is written to its home cache agent which can be either local or remote.
(Event 0BH, Umask 02H) Counts the number of L3 lines victimized that were in the E state.
(Event 0BH, Umask 04H) Counts the number of L3 lines victimized that were in the S state.
(Event 0BH, Umask 08H) Counts the number of L3 lines victimized that were in the I state.
(Event 0BH, Umask 10H) Counts the number of L3 lines victimized that were in the F state.
(Event 0BH, Umask 1FH) Counts the number of L3 lines victimized in any state.
(Event 20H, Umask 01H) Counts number of Quickpath Home Logic read requests from the IOH.
(Event 20H, Umask 02H) Counts number of Quickpath Home Logic write requests from the IOH.
(Event 20H, Umask 04H) Counts number of Quickpath Home Logic read requests from a remote socket.
(Event 20H, Umask 08H) Counts number of Quickpath Home Logic write requests from a remote socket.
(Event 20H, Umask 10H) Counts number of Quickpath Home Logic read requests from the local socket.
(Event 20H, Umask 20H) Counts number of Quickpath Home Logic write requests from the local socket.
(Event 21H, Umask 01H) Counts uclk cycles all entries in the Quickpath Home Logic IOH are full.
(Event 21H, Umask 02H) Counts uclk cycles all entries in the Quickpath Home Logic remote tracker are full.
(Event 21H, Umask 04H) Counts uclk cycles all entries in the Quickpath Home Logic local tracker are full.
(Event 22H, Umask 01H) Counts uclk cycles all entries in the Quickpath Home Logic IOH are busy.
(Event 22H, Umask 02H) Counts uclk cycles all entries in the Quickpath Home Logic remote tracker are busy.
(Event 22H, Umask 04H) Counts uclk cycles all entries in the Quickpath Home Logic local tracker are busy.
(Event 23H, Umask 01H) QHL IOH tracker allocate to deallocate read occupancy.
(Event 23H, Umask 02H) QHL remote tracker allocate to deallocate read occupancy.
(Event 23H, Umask 04H) QHL local tracker allocate to deallocate read occupancy.
(Event 24H, Umask 02H) Counts number of QHL Active Address Table (AAT) entries that saw a max of 2 conflicts. The AAT is a structure that tracks requests that are in conflict. The requests themselves are in the home tracker entries. The count is reported when an AAT entry deallocates.
(Event 24H, Umask 04H) Counts number of QHL Active Address Table (AAT) entries that saw a max of 3 conflicts. The AAT is a structure that tracks requests that are in conflict. The requests themselves are in the home tracker entries. The count is reported when an AAT entry deallocates.
(Event 25H, Umask 01H) Counts cycles the Quickpath Home Logic IOH Tracker contains two or more requests with an address conflict. A max of 3 requests can be in conflict.
(Event 25H, Umask 02H) Counts cycles the Quickpath Home Logic Remote Tracker contains two or more requests with an address conflict. A max of 3 requests can be in conflict.
(Event 25H, Umask 04H) Counts cycles the Quickpath Home Logic Local Tracker contains two or more requests with an address conflict. A max of 3 requests can be in conflict.
(Event 26H, Umask 01H) Counts number of requests to the Quickpath Memory Controller that bypass the Quickpath Home Logic. All local accesses can be bypassed. For remote requests, only read requests can be bypassed.
(Event 27H, Umask 01H) Uncore cycles all the entries in the DRAM channel 0 medium or low priority queue are occupied with read requests.
(Event 27H, Umask 02H) Uncore cycles all the entries in the DRAM channel 1 medium or low priority queue are occupied with read requests.
(Event 27H, Umask 04H) Uncore cycles all the entries in the DRAM channel 2 medium or low priority queue are occupied with read requests.
(Event 27H, Umask 08H) Uncore cycles all the entries in the DRAM channel 0 medium or low priority queue are occupied with write requests.
(Event 27H, Umask 10H) Uncore cycles all the entries in the DRAM channel 1 medium or low priority queue are occupied with write requests.
(Event 27H, Umask 20H) Uncore cycles all the entries in the DRAM channel 2 medium or low priority queue are occupied with write requests.
(Event 28H, Umask 01H) Counts cycles all the entries in the DRAM channel 0 high priority queue are occupied with isochronous read requests.
(Event 28H, Umask 02H) Counts cycles all the entries in the DRAM channel 1 high priority queue are occupied with isochronous read requests.
(Event 28H, Umask 04H) Counts cycles all the entries in the DRAM channel 2 high priority queue are occupied with isochronous read requests.
(Event 28H, Umask 08H) Counts cycles all the entries in the DRAM channel 0 high priority queue are occupied with isochronous write requests.
(Event 28H, Umask 10H) Counts cycles all the entries in the DRAM channel 1 high priority queue are occupied with isochronous write requests.
(Event 28H, Umask 20H) Counts cycles all the entries in the DRAM channel 2 high priority queue are occupied with isochronous write requests.
(Event 29H, Umask 01H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding read request to DRAM channel 0.
(Event 29H, Umask 02H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding read request to DRAM channel 1.
(Event 29H, Umask 04H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding read request to DRAM channel 2.
(Event 29H, Umask 08H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding write request to DRAM channel 0.
(Event 29H, Umask 10H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding write request to DRAM channel 1.
(Event 29H, Umask 20H) Counts cycles where Quickpath Memory Controller has at least 1 outstanding write request to DRAM channel 2.
(Event 2AH, Umask 01H) IMC channel 0 normal read request occupancy.
(Event 2AH, Umask 02H) IMC channel 1 normal read request occupancy.
(Event 2AH, Umask 04H) IMC channel 2 normal read request occupancy.
(Event 2BH, Umask 01H) IMC channel 0 issoc read request occupancy.
(Event 2BH, Umask 02H) IMC channel 1 issoc read request occupancy.
(Event 2BH, Umask 04H) IMC channel 2 issoc read request occupancy.
(Event 2BH, Umask 07H) IMC issoc read request occupancy.
(Event 2CH, Umask 01H) Counts the number of Quickpath Memory Controller channel 0 medium and low priority read requests. The QMC channel 0 normal read occupancy divided by this count provides the average QMC channel 0 read latency.
(Event 2CH, Umask 02H) Counts the number of Quickpath Memory Controller channel 1 medium and low priority read requests. The QMC channel 1 normal read occupancy divided by this count provides the average QMC channel 1 read latency.
(Event 2CH, Umask 04H) Counts the number of Quickpath Memory Controller channel 2 medium and low priority read requests. The QMC channel 2 normal read occupancy divided by this count provides the average QMC channel 2 read latency.
(Event 2CH, Umask 07H) Counts the number of Quickpath Memory Controller medium and low priority read requests. The QMC normal read occupancy divided by this count provides the average QMC read latency.
(Event 2DH, Umask 01H) Counts the number of Quickpath Memory Controller channel 0 high priority isochronous read requests.
(Event 2DH, Umask 02H) Counts the number of Quickpath Memory Controller channel 1 high priority isochronous read requests.
(Event 2DH, Umask 04H) Counts the number of Quickpath Memory Controller channel 2 high priority isochronous read requests.
(Event 2DH, Umask 07H) Counts the number of Quickpath Memory Controller high priority isochronous read requests.
(Event 2EH, Umask 01H) Counts the number of Quickpath Memory Controller channel 0 critical priority isochronous read requests.
(Event 2EH, Umask 02H) Counts the number of Quickpath Memory Controller channel 1 critical priority isochronous read requests.
(Event 2EH, Umask 04H) Counts the number of Quickpath Memory Controller channel 2 critical priority isochronous read requests.
(Event 2EH, Umask 07H) Counts the number of Quickpath Memory Controller critical priority isochronous read requests.
(Event 2FH, Umask 01H) Counts number of full cache line writes to DRAM channel 0.
(Event 2FH, Umask 02H) Counts number of full cache line writes to DRAM channel 1.
(Event 2FH, Umask 04H) Counts number of full cache line writes to DRAM channel 2.
(Event 2FH, Umask 07H) Counts number of full cache line writes to DRAM.
(Event 2FH, Umask 08H) Counts number of partial cache line writes to DRAM channel 0.
(Event 2FH, Umask 10H) Counts number of partial cache line writes to DRAM channel 1.
(Event 2FH, Umask 20H) Counts number of partial cache line writes to DRAM channel 2.
(Event 2FH, Umask 38H) Counts number of partial cache line writes to DRAM.
(Event 30H, Umask 01H) Counts number of DRAM channel 0 cancel requests.
(Event 30H, Umask 02H) Counts number of DRAM channel 1 cancel requests.
(Event 30H, Umask 04H) Counts number of DRAM channel 2 cancel requests.
(Event 30H, Umask 07H) Counts number of DRAM cancel requests.
(Event 31H, Umask 01H) Counts number of DRAM channel 0 priority updates. A priority update occurs when an ISOC high or critical request is received by the QHL and there is a matching request with normal priority that has already been issued to the QMC. In this instance, the QHL will send a priority update to QMC to expedite the request.
(Event 31H, Umask 02H) Counts number of DRAM channel 1 priority updates. A priority update occurs when an ISOC high or critical request is received by the QHL and there is a matching request with normal priority that has already been issued to the QMC. In this instance, the QHL will send a priority update to QMC to expedite the request.
(Event 31H, Umask 04H) Counts number of DRAM channel 2 priority updates. A priority update occurs when an ISOC high or critical request is received by the QHL and there is a matching request with normal priority that has already been issued to the QMC. In this instance, the QHL will send a priority update to QMC to expedite the request.
(Event 31H, Umask 07H) Counts number of DRAM priority updates. A priority update occurs when an ISOC high or critical request is received by the QHL and there is a matching request with normal priority that has already been issued to the QMC. In this instance, the QHL will send a priority update to QMC to expedite the request.
(Event 33H, Umask 04H) Counts number of Force Acknowledge Conflict messages sent by the Quickpath Home Logic to the local home.
(Event 40H, Umask 01H) Counts cycles the Quickpath outbound link 0 HOME virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 02H) Counts cycles the Quickpath outbound link 0 SNOOP virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 04H) Counts cycles the Quickpath outbound link 0 non-data response virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 08H) Counts cycles the Quickpath outbound link 1 HOME virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 10H) Counts cycles the Quickpath outbound link 1 SNOOP virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 20H) Counts cycles the Quickpath outbound link 1 non-data response virtual channel is stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 07H) Counts cycles the Quickpath outbound link 0 virtual channels are stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 40H, Umask 38H) Counts cycles the Quickpath outbound link 1 virtual channels are stalled due to lack of a VNA and VN0 credit. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 01H) Counts cycles the Quickpath outbound link 0 Data ResponSe virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 02H) Counts cycles the Quickpath outbound link 0 Non-Coherent Bypass virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 04H) Counts cycles the Quickpath outbound link 0 Non-Coherent Standard virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 08H) Counts cycles the Quickpath outbound link 1 Data ResponSe virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 10H) Counts cycles the Quickpath outbound link 1 Non-Coherent Bypass virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 20H) Counts cycles the Quickpath outbound link 1 Non-Coherent Standard virtual channel is stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 07H) Counts cycles the Quickpath outbound link 0 virtual channels are stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 41H, Umask 38H) Counts cycles the Quickpath outbound link 1 virtual channels are stalled due to lack of VNA and VN0 credits. Note that this event does not filter out when a flit would not have been selected for arbitration because another virtual channel is getting arbitrated.
(Event 42H, Umask 02H) Number of cycles that the header buffer in the Quickpath Interface outbound link 0 is busy.
(Event 42H, Umask 08H) Number of cycles that the header buffer in the Quickpath Interface outbound link 1 is busy.
(Event 43H, Umask 01H) Number of cycles that snoop packets incoming to the Quickpath Interface link 0 are stalled and not sent to the GQ because the GQ Peer Probe Tracker (PPT) does not have any available entries.
(Event 43H, Umask 02H) Number of cycles that snoop packets incoming to the Quickpath Interface link 1 are stalled and not sent to the GQ because the GQ Peer Probe Tracker (PPT) does not have any available entries.
(Event 60H, Umask 01H) Counts number of DRAM Channel 0 open commands issued either for read or write. To read or write data, the referenced DRAM page must first be opened.
(Event 60H, Umask 02H) Counts number of DRAM Channel 1 open commands issued either for read or write. To read or write data, the referenced DRAM page must first be opened.
(Event 60H, Umask 04H) Counts number of DRAM Channel 2 open commands issued either for read or write. To read or write data, the referenced DRAM page must first be opened.
(Event 61H, Umask 01H) DRAM channel 0 command issued to CLOSE a page due to page idle timer expiration. Closing a page is done by issuing a precharge.
(Event 61H, Umask 02H) DRAM channel 1 command issued to CLOSE a page due to page idle timer expiration. Closing a page is done by issuing a precharge.
(Event 61H, Umask 04H) DRAM channel 2 command issued to CLOSE a page due to page idle timer expiration. Closing a page is done by issuing a precharge.
(Event 62H, Umask 01H) Counts the number of precharges (PRE) that were issued to DRAM channel 0 because there was a page miss. A page miss refers to a situation in which a page is currently open and another page from the same bank needs to be opened. The new page experiences a page miss. Closing of the old page is done by issuing a precharge.
(Event 62H, Umask 02H) Counts the number of precharges (PRE) that were issued to DRAM channel 1 because there was a page miss. A page miss refers to a situation in which a page is currently open and another page from the same bank needs to be opened. The new page experiences a page miss. Closing of the old page is done by issuing a precharge.
(Event 62H, Umask 04H) Counts the number of precharges (PRE) that were issued to DRAM channel 2 because there was a page miss. A page miss refers to a situation in which a page is currently open and another page from the same bank needs to be opened. The new page experiences a page miss. Closing of the old page is done by issuing a precharge.
(Event 63H, Umask 01H) Counts the number of times a read CAS command was issued on DRAM channel 0.
(Event 63H, Umask 02H) Counts the number of times a read CAS command was issued on DRAM channel 0 where the command issued used the auto-precharge (auto page close) mode.
(Event 63H, Umask 04H) Counts the number of times a read CAS command was issued on DRAM channel 1.
(Event 63H, Umask 08H) Counts the number of times a read CAS command was issued on DRAM channel 1 where the command issued used the auto-precharge (auto page close) mode.
(Event 63H, Umask 10H) Counts the number of times a read CAS command was issued on DRAM channel 2.
(Event 63H, Umask 20H) Counts the number of times a read CAS command was issued on DRAM channel 2 where the command issued used the auto-precharge (auto page close) mode.
(Event 64H, Umask 01H) Counts the number of times a write CAS command was issued on DRAM channel 0.
(Event 64H, Umask 02H) Counts the number of times a write CAS command was issued on DRAM channel 0 where the command issued used the auto-precharge (auto page close) mode.
(Event 64H, Umask 04H) Counts the number of times a write CAS command was issued on DRAM channel 1.
(Event 64H, Umask 08H) Counts the number of times a write CAS command was issued on DRAM channel 1 where the command issued used the auto-precharge (auto page close) mode.
(Event 64H, Umask 10H) Counts the number of times a write CAS command was issued on DRAM channel 2.
(Event 64H, Umask 20H) Counts the number of times a write CAS command was issued on DRAM channel 2 where the command issued used the auto-precharge (auto page close) mode.
(Event 65H, Umask 01H) Counts number of DRAM channel 0 refresh commands. DRAM loses data content over time. In order to keep correct data content, the data values have to be refreshed periodically.
(Event 65H, Umask 02H) Counts number of DRAM channel 1 refresh commands. DRAM loses data content over time. In order to keep correct data content, the data values have to be refreshed periodically.
(Event 65H, Umask 04H) Counts number of DRAM channel 2 refresh commands. DRAM loses data content over time. In order to keep correct data content, the data values have to be refreshed periodically.
(Event 66H, Umask 01H) Counts number of DRAM Channel 0 precharge-all (PREALL) commands that close all open pages in a rank. PREALL is issued when the DRAM needs to be refreshed or needs to go into a power down mode.
(Event 66H, Umask 02H) Counts number of DRAM Channel 1 precharge-all (PREALL) commands that close all open pages in a rank. PREALL is issued when the DRAM needs to be refreshed or needs to go into a power down mode.
(Event 66H, Umask 04H) Counts number of DRAM Channel 2 precharge-all (PREALL) commands that close all open pages in a rank. PREALL is issued when the DRAM needs to be refreshed or needs to go into a power down mode.

SEE ALSO
pmc(3), pmc.atom(3), pmc.core(3), pmc.corei7(3), pmc.iaf(3), pmc.k7(3), pmc.k8(3), pmc.soft(3), pmc.tsc(3), pmc.ucf(3), pmc.westmere(3), pmc.westmereuc(3), pmc_cpuinfo(3), pmclog(3), hwpmc(4)

HISTORY
The pmc library first appeared in FreeBSD 6.0.

AUTHORS
The Performance Counters Library (libpmc, -lpmc) was written by Joseph Koshy <jkoshy@FreeBSD.org>.
March 24, 2010 FreeBSD 13.1-RELEASE
