About
Management of cross-functional and multi-geography teams. Technical contributor.…
Articles by Blaine
Contributions
Activity
-
I got this nice note today from @Joanne Dolan and the #TeenTurn team: Thank you again for volunteering your time and expertise on Saturday. It's…
Liked by Blaine Gaither
-
Don't blink or you'll miss me (I'm the one in the hoodie).
Liked by Blaine Gaither
-
Dr. Little has left the manufacturing line. But his law (N = λR) remains forever.
Liked by Blaine Gaither
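Little's Law, cited in the post above, relates the average number of items in a system (N) to the arrival rate (λ) and the average residence time (R). A minimal illustration (the function name is mine, purely for demonstration):

```python
def littles_law_n(arrival_rate, residence_time):
    """Little's Law: N = lambda * R.
    Average number in the system = arrival rate * average time each item spends there."""
    return arrival_rate * residence_time

# Example: 50 requests/sec, each resident for 0.2 s on average -> 10 in flight.
n_in_system = littles_law_n(50, 0.2)
```

The law holds for any stable system regardless of arrival distribution, which is why it appears everywhere from manufacturing lines to server queues.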
Experience
Education
-
New Mexico Institute of Mining and Technology
Activities and Societies: Association for Computing Machinery (Chapter Treasurer); Pi Mu Epsilon; Tech Scholar
Publications
-
So You Think That You Want to Model Something?
Computer Measurement Group
A high level overview of the trade-offs involved in benchmarking, simulation and analytical modeling
-
Why Does Solid State Disk Lower CPI?
Computer Measurement Group
This paper discusses an anomaly discovered when we ran two CPU-bound benchmarks (TPC-C and TPC-E) first with rotating disks and then with SSDs. One would not normally expect a CPU-bound benchmark to be improved by faster I/O. A model of CPU cache residency was developed, which helped show that the improved latency of the SSDs preserved more of the workload's working set across I/Os.
Other authors
-
Varying Memory Size with TPC-C, Performance and Resource Effects
The Eleventh Workshop on Computer Architecture Evaluation using Commercial Workloads (CAECW-11)
Using TPC‐C™ as an evaluation tool, we examine the effects of varying memory size on throughput, I/O rate, bus utilization and cache utilization. Unexpected relationships between memory size and resource utilization are revealed and quantified. The platform studied was an HP Integrity rx6600 with two dual‐core Itanium 2 CPUs, running at 1.6 GHz with 24 Mbytes cache memory per socket.
Other authors
-
Commercial Benchmarking Suites
ACM SIGMETRICS Performance Evaluation Review, Vol. 18 #3
A discussion of commercial benchmark suites.
-
Reflective memory instrumentation issues
ACM Press frontier series
In the book: Performance Instrumentation and Visualization, ACM Press, New York, NY / Redwood City, CA, 1990.
DOI: 10.1145/100215.100267
Other authors
-
Scientific visualization of performance data: evaluation of DV-Draw.
Association for Computing Machinery - SIGMETRICS
SIGMETRICS Performance Evaluation Review 01/1990; 18:48-53. DOI: 10.1145/101320.101323
ABSTRACT: This report discusses the attributes of the DV-Draw package from the VI Corporation of Amherst, Massachusetts. DV-Draw is a scientific visualization package that is part of a larger package called DataViews. The requirements for visualization software in performance evaluation are identified. The application of DV-Draw to animate the output of an architectural model was successful.
-
Instrumentation for future parallel systems
Association For Computing Machinery
BOOK: Instrumentation for future parallel computing systems
Pages 111-120, ACM New York, NY, USA ©1989
ISBN: 0-201-50390-5, DOI: 10.1145/75705.75711
-
Parallel algorithm development workbench
IEEE
DOI: 10.1109/SUPERC.1988.44630. Conference: Supercomputing '88, Vol. 1, Proceedings.
ABSTRACT: The authors discuss the need for rapid prototyping during algorithm and system development. They propose a workbench environment to support an analysis effort to optimize algorithms, communications schemes, and CPU designs for parallel processing applications. This environment allows these facets of a parallel system to be investigated and optimized, either singly or jointly. The authors describe how an algorithm can be evaluated at various levels of abstraction and how the environment can support the design decomposition down to real code. The environment should allow algorithm and system designers with minimal modeling experience to experiment with and optimize applications while expending minimal effort.
Other authors
-
Branch Prediction Using Opcode Synonyms with Writeback
Computer Science Department, New Mexico Inst. of Mining and Technology
Discusses trace-driven modeling of branch prediction, the architecture of the four-state branch predictor on the Burroughs B4900, and the results.
-
Hybrid Instrumentation on the NCR Criterion
Computer Measurement Group
Fifth International Computer Measurement Group Conference, Dallas, TX, USA, December 4-7, 1979, Proceedings.
NCR's experience using the hybrid instrumentation approach, and the way this approach was incorporated into the VRX system architecture, are presented. Of particular interest are the unique hybrid facilities provided for user tasks to enable them to communicate events to a monitor.
Other authors
-
Hidden-Line Plotting Program (Remark on Algorithm 420).
Communications of the ACM
p. 324, June 1974, ACM, New York, NY, USA. ISSN: 0001-0782, EISSN: 1557-7317, DOI: 10.1145/355616.364024
-
Theoretical prediction of airplane stability derivatives at subcritical speeds
NASA. 2/1973
The theoretical development and application of an analysis for predicting the major static and rotary stability derivatives of a complete airplane are described. The analysis utilizes potential flow theory to compute the surface flow fields and pressures on any configuration that can be synthesized from arbitrary lifting bodies and nonplanar thick lifting panels. The pressures are integrated to obtain section and total configuration loads and moments due to side slip, angle of attack, pitching motion, rolling motion, yawing motion, and control surface deflection. Subcritical compressibility is accounted for by means of the Gothert similarity rule.
Other authors
Patents
-
Fault tolerance for persistent main memory
Issued US 10,452,498
By adding fault tolerance functionality in the main memory access path that operates on small units of data, for example individual cache lines, at main memory access speeds, this type of data protection (e.g., data duplication or RAIDing) can be extended to direct memory access, such as persistent main memory, without awareness of the protection mechanism at a software application level. Current redundancy solutions only move data to disk when software is ready to commit that data. The system must then wait for this operation to complete, including the time required to write full RAID data to multiple devices before proceeding. In the present invention, storage commit can be completed faster by performing the RAID updates to persistent main memory as the individual cache line writes occur. Further, by spreading cache lines of a memory page across multiple persistent main memories, RAID operations for multiple cache lines can be processed in parallel. By processing the RAID operations in parallel, the time to complete transactions is reduced and demand on the system is balanced. The overall result is a faster, more efficient distribution of protected data across storage devices, from a power and data-movement perspective.
Other inventors
-
Object storage device with probabilistic data structure
Issued US 10,282,371
Systems and methods for utilizing probabilistic data structures to handle interrogations regarding whether or not objects might be stored in an object store of an object storage device are disclosed. More particularly, a controller of an object storage device includes control circuitry and a memory operative to store a probabilistic data structure. The probabilistic data structure has data related to the presence of data in an object store of the object storage device. The control circuitry is configured to receive an interrogation from a computing device for an object; utilize the probabilistic data structure to determine that the object is possibly stored in the object store or definitely not stored in the object store; and in response to a determination that the object is definitely not stored in the object store, respond to the interrogation that the object is not stored in the object store.
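The "possibly stored / definitely not stored" behavior described in this abstract is characteristic of a Bloom filter. A minimal sketch of that idea (the patent does not name a specific structure; the class name and parameters here are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: answers 'possibly present' or 'definitely not present'."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)  # bit array, initially all zero

    def _positions(self, key):
        # Derive k bit positions from independently salted hashes of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False -> definitely absent; True -> possibly present (small false-positive rate).
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))
```

A negative answer is always exact, which is what lets the controller reject an interrogation without touching the object store; only positive answers may require a real lookup.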
-
Controlling error propagation due to fault in computing node of a distributed computing system
Issued US 9,990,244
A technique includes receiving an alert indicator in a distributed computer system that includes a plurality of computing nodes coupled together by a cluster interconnection fabric. The alert indicator indicates detection of a fault in a first computing node of the plurality of computing nodes. The technique includes regulating communication between the first computing node and at least one of the other computing nodes in response to the alert indicator, to contain error propagation due to the fault within the first computing node.
Other inventors
-
Cache and method for cache bypass functionality
Issued US 9,405,696
In cases of high loads, caches may become saturated by the number of incoming requests, adversely affecting latency. The cache controller is configured to receive memory requests to be satisfied by the cache memory or the main memory. In addition, the cache controller is configured to process cache activity information to cause at least one of the memory requests to bypass the cache memory.
Other inventors
-
Managing workload distribution among computer systems based on intersection of throughput and latency models
Issued US 9,389,919
A method of determining an estimated data throughput capacity for a computer system includes the steps of creating a first model of data throughput of a central processing subsystem in the computer system as a function of latency of a memory subsystem of the computer system; creating a second model of the latency in the memory subsystem as a function of bandwidth demand of the memory subsystem; and finding a point of intersection of the first and second models.
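The intersection described here can be sketched as a fixed-point computation: alternate the two models, feeding each one's output into the other, until the throughput and latency estimates agree. The model shapes and constants below are hypothetical, not from the patent:

```python
def estimate_capacity(throughput_of_latency, latency_of_demand,
                      initial_latency=100.0, iters=50):
    """Fixed-point iteration: feed each model's output into the other until they agree."""
    lat = initial_latency
    for _ in range(iters):
        thr = throughput_of_latency(lat)   # CPU model: throughput at this memory latency
        lat = latency_of_demand(thr)       # memory model: latency at this bandwidth demand
    return throughput_of_latency(lat), lat

# Hypothetical model shapes: throughput falls as memory latency rises;
# memory latency rises with bandwidth demand (queueing effect).
thr_model = lambda lat_ns: 1000.0 / (1.0 + lat_ns / 100.0)  # transactions per ms
lat_model = lambda thr: 80.0 + 0.05 * thr                   # nanoseconds

capacity, lat_at_capacity = estimate_capacity(thr_model, lat_model)
```

Because the composed map is a contraction for these shapes, the iteration converges quickly; the returned pair is the point where the two curves intersect.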
Other inventors
-
Performing refresh of a memory device in response to access of data
Issued TWI525436B
Using stochastic sampling to inform refresh in order to reduce row-hammer impact. Also granted in China as CN104488031B.
Other inventors
-
External cache operation based on clean castout messages
Issued US 9,189,424
A processor transmits clean castout messages indicating that a cache line is not dirty and is no longer being stored by a lowest level cache of the processor. An external cache receives the clean castout messages and manages cache lines based in part on the clean castout messages.
Other inventors
-
Address masking between users
Issued US 8,819,348
[Security; Timing attack] Provided is a method for uniquely masking addressing to the cache memory for each user, thereby reducing the risk of a timing attack by one user on another. The method comprises assigning a first mask value to the first user and a second mask value to the second user; the mask values are unique to one another. While executing a first instruction on behalf of the first user, the method applies the first mask value to the set-selection bits in the memory address accessed by the first instruction. While executing a second instruction on behalf of the second user, the method applies the second mask value to the set-selection bits in the memory address accessed by the second instruction. The result offers an additional level of security between users, as well as reducing the occurrence of threads or processes contending for the same memory address.
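The set-selection masking can be illustrated with a few lines of bit manipulation; the cache geometry (64-byte lines, 1024 sets) and names below are assumptions for illustration, not from the patent:

```python
SET_SHIFT = 6   # 64-byte cache lines: set-selection field starts at bit 6
SET_BITS = 10   # 1024 sets: 10 set-selection bits
SET_MASK = ((1 << SET_BITS) - 1) << SET_SHIFT

def mask_address(addr, user_mask):
    """XOR a per-user mask into the set-selection bits, leaving all other bits untouched."""
    return addr ^ ((user_mask & ((1 << SET_BITS) - 1)) << SET_SHIFT)
```

Two properties make this workable: XOR is an involution, so applying the same user's mask twice recovers the original address; and two users with different masks see the same physical address mapped to different cache sets, which is what frustrates cross-user timing probes.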
Other inventors
-
Determining whether a right to use memory modules in a reliability mode has been acquired
Issued US 8,812,915
Systems operating in double chip kill mode can pay a performance penalty relative to operation in single chip kill mode. A method is presented to allow a system to dynamically decide which mode to operate in, providing improved performance without impacting reliability.
Other inventors
-
Multiple processing elements
Issued US 8,782,466
[Power Management; Overclocking; Reliability] Computing systems can be manufactured to include multiple processing elements. For example, a system can include multiple microprocessors and/or multiple cores in a microprocessor. In addition, some systems can be modified so that one or more of the processing elements is run outside a normal, safe operating range of the processor. Running a processing element outside its normal operating range may be referred to as overclocking the processing element. According to an embodiment, the multiple processing elements of a computing system may be divided into three groups, each group containing one or more processing elements. The first group may be a normal operating range group in which the one or more processing elements are restricted to run within their normal operating range. The second group may be an overclocked operating range group in which the one or more processing elements are allowed to run outside their normal operating range. The third group may be a replacement group. A processing element in the replacement group may be inactive at first but may be activated when a processing element from the first or second group fails.
-
Bit ordering for communicating an address on a serial fabric
Issued US 8,688,890
[Optimize bit ordering to enable early set look-up] A method for handling a request of storage on a serial fabric comprising formatting an address for communication on a serial fabric into a plurality of fields including a field comprising at least one set selection bit and a field comprising at least one tag bit. The address is communicated on the serial fabric with the field comprising the at least one set selection bit communicated first.
Other inventors
-
Cache and method for cache bypass functionality
Issued US 8,683,139
[External Cache BW Management] A cache is provided for operatively coupling a processor with a main memory. The cache includes a cache memory and a cache controller operatively coupled with the cache memory. The cache controller is configured to receive memory requests to be satisfied by the cache memory or the main memory. In addition, the cache controller is configured to process cache activity information to cause at least one of the memory requests to bypass the cache memory.
Other inventors
-
System for controlling I/O devices in a multi-partition computer system
Issued US 8,677,034
[Large Partitionable Server; IO Virtualization] An I/O control system for controlling I/O devices in a multi-partition computer system. The I/O control system includes an IOP partition containing an I/O processor cell with at least one CPU executing a control program, and a plurality of standard partitions, each including a cell comprising at least one CPU executing a control program, coupled, via shared memory, to the I/O processor cell. One or more of the standard partitions becomes an enrolled partition, in communication with the I/O processor cell, in response to requesting a connection to the IOP cell. After a partition is enrolled with the I/O processor cell, I/O requests directed to the I/O devices from the enrolled partition are distributed over shared I/O resources controlled by the I/O processor cell.
-
Datacenter workload evaluation
Issued US 8,670,971
[Cloud; QoS] A method is provided for evaluating workload consolidation on a computer located in a datacenter. The method comprises inflating a balloon workload on a first computer that simulates the consolidated workload of a workload originating on the first computer and a workload originating on a second computer. The method further comprises evaluating the quality of service of the first computer's workload during the inflating, and transferring the workload originating on either the first or the second computer to the other computer if the quality of service remains above a threshold.
Other inventors
-
Switch module based non-volatile memory in a server
Issued US 8,234,459
[Storage] A switch module having shared memory that is allocated to other blade servers. A memory controller partitions and enables access to partitions of the shared memory by requesting blade servers.
Other inventors
-
Method and program product for avoiding cache congestion by offsetting addresses while allocating memory
Issued US 7,237,084
A method of allocating memory operates to avoid overlapping hot spots in cache that can ordinarily cause cache thrashing. The method includes the steps of determining a spacer size, reserving a spacer block of memory from a memory pool, and allocating memory at a location following the spacer block. In an alternative embodiment, the spacer size is determined randomly within a range of allowable spacer sizes. In other alternative embodiments, spacers are allocated based upon the size of a previously allocated memory block.
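A minimal sketch of the randomized-spacer idea, written as a bump allocator over a fixed pool (the pool size, spacer range, and names are illustrative assumptions):

```python
import random

CACHE_LINE = 64
MAX_SPACER_LINES = 64  # spacer of up to 64 cache lines (4 KiB)

class SpacedPool:
    """Bump allocator that reserves a random spacer before each block, so that
    identically structured allocations do not all map onto the same cache sets."""

    def __init__(self, size):
        self.pool = bytearray(size)
        self.offset = 0

    def alloc(self, nbytes):
        spacer = random.randrange(MAX_SPACER_LINES + 1) * CACHE_LINE
        start = self.offset + spacer
        if start + nbytes > len(self.pool):
            raise MemoryError("pool exhausted")
        self.offset = start + nbytes
        return start  # byte offset of the allocation within the pool
```

The spacer is always a whole number of cache lines, so successive blocks start at randomized set indices rather than repeating the same stride, which is the hot-spot overlap the patent aims to avoid.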
Other inventors
-
Analyzing effectiveness of a computer cache by estimating a hit rate based on applying a subset of real-time addresses to a model of the cache
Issued US 6,892,173
[Cache Memory; Modeling] A system and method for analyzing the effectiveness of a computer cache memory. A bus with memory transactions is monitored. A subset of addresses, along with associated transaction data, on the bus is captured and stored in a memory. The captured addresses are applied to a software model of a computer cache. The capture process is repeated multiple times, each time with a different subset of the address space. Statistical estimates of hit rate and other parameters of interest are computed based on the software model. Multiple cache configurations may be modeled for comparison of performance. Alternatively, a subset of addresses along with associated transaction data is sent to a hardware model of a cache. The contents of the hardware model are periodically dumped to memory or statistical data may be computed and placed in the memory. Statistical estimates of hit rate and other parameters of interest are computed based on the contents of the memory.
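The sampling approach can be sketched as an LRU set-associative cache simulated only for a subset of sets; the hit rate observed on the sampled sets estimates the overall rate. The geometry and parameter names below are illustrative, not from the patent:

```python
from collections import OrderedDict

def estimate_hit_rate(addresses, num_sets=256, ways=4, line=64,
                      sampled_sets=frozenset(range(0, 256, 16))):
    """Simulate an LRU set-associative cache for a sampled subset of sets only;
    the hit rate seen on those sets estimates the overall hit rate."""
    lru_by_set = {s: OrderedDict() for s in sampled_sets}
    hits = refs = 0
    for addr in addresses:
        tag = addr // line
        s = tag % num_sets
        if s not in lru_by_set:
            continue  # address maps to an unsampled set: skip it
        refs += 1
        lru = lru_by_set[s]
        if tag in lru:
            hits += 1
            lru.move_to_end(tag)          # mark most recently used
        else:
            if len(lru) >= ways:
                lru.popitem(last=False)   # evict least recently used line
            lru[tag] = None
    return hits / refs if refs else 0.0
```

Because set indexing hashes references fairly evenly, the behavior of a 1-in-16 sample of sets is a good statistical proxy for the full cache, at a fraction of the simulation cost, which is what makes repeated captures over different address subsets practical.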
Other inventors
-
Method and apparatus for clearing obstructions from computer system cooling fans
Issued US 6,532,151
[Computer Coughing] An obstruction is removed from a computer system cooling fan by manipulating fan rotation. When a fan obstruction is detected, the fan is stopped. If the obstruction is caused by an object that was drawn toward the fan intake, such as a sheet of paper, this operation may clear the obstruction. The fan may also be reversed to attempt to blow the obstruction clear of the fan. Thereafter, the fan is returned to normal operation and is monitored to determine whether the obstruction was removed. If the fan is still obstructed, these steps can be repeated. If the attempts to clear the obstruction are unsuccessful, then the computer system operator or management software can be signaled.
Other inventors
-
Systems and methods for increasing the difficulty of data sniffing
Issued US 7,370,209
[Security] Disclosed are systems and methods for increasing the difficulty of data sniffing. In one embodiment, a system and method pertain to: presenting information to a user via an output device, the information corresponding to characters available for identification as part of sensitive information to be entered by the user; receiving from the user, via an input device, identification of information other than the explicit sensitive information, the received information being indicative of the sensitive information, such that the sensitive information cannot be captured directly through data sniffing; and interpreting the identified information to determine the sensitive information.
Other inventors
-
Coherency protocol for computer cache
Issued US 6,360,301
[Coherency Filters and Caches] A lower level cache detects when a line of memory has been evicted from a higher level cache. The cache coherency protocol for the lower level cache places the line into a special state. If a line in the special state is evicted from the lower level cache, the lower level cache knows that the line is not cached at a higher level, and therefore a back-invalidate transaction is not needed. Reducing the number of back-invalidate transactions improves the performance of the system.
Other inventors
-
Computer cache memory with classes and dynamic selection of replacement algorithms
Issued US 6,223,256
A cache memory system for a computer. Target entries for the cache memory include a class attribute. The cache may use a different replacement algorithm for each possible class attribute value. The cache may be partitioned into sections based on class attributes. Class attributes may indicate a relative likelihood of future use. Alternatively, class attributes may be used for locking. In one embodiment, each cache section is dedicated to one corresponding class. In alternative embodiments, cache classes are ranked in a hierarchy, and target entries having higher ranked attributes may be entered into cache sections corresponding to lower ranked attributes. With each of the embodiments, entries with a low likelihood of future use or low temporal locality are less likely to flush entries from the cache that have a higher likelihood of future use.
-
Method and apparatus for gathering three dimensional data with a digital imaging system
Issued US 6,950,135
A digital image capture device including circuits capable of measuring the distance between the image capture device and an imaged object allows the capture of three-dimensional data of the surface of the object facing the image capture device. The distance data is obtained by the addition of a flash unit, and very high resolution timers to multiple pixels within the image capture device to measure the time required for the flash to reflect from the object. Since the speed of light is constant, the distance from the flash to the object to the image capture device may be calculated from the delay for the light from the flash to reach the device. Multiple pixels may be used to construct a three-dimensional model of the surface of the object facing the image capture device. Multiple images including distance data may be taken in order to generate a complete three-dimensional model of the surface of the object.
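The per-pixel distance calculation reduces to the round-trip time of the flash pulse multiplied by the speed of light and halved. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_delay(round_trip_seconds):
    """One-way distance to the object from the measured round-trip time of the flash pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20 ns round trip corresponds to roughly 3 m to the object.
d = distance_from_delay(20e-9)
```

The nanosecond-scale delays involved (about 6.7 ns per metre of distance) are what drive the patent's requirement for very high resolution timers at each pixel.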
Other inventors
-
Method and apparatus for translating virtual path file access operations to physical file path access
Issued US 6,381,615
A method and apparatus virtualize file access operations and other I/O operations in operating systems by performing string substitutions upon file paths or other resource identifiers to convert the virtual destination of an I/O operation to a physical destination. A virtual file system translation driver is interposed between a file system driver and applications and system utilities. The virtual file system translation driver receives file access requests from the applications and system utilities, and translates the file path to virtualize the file system. In a first embodiment, the file system is partially virtualized and a user can see both the virtual file paths and the physical file paths. In second and third embodiments, the file system is completely virtualized from the point of view of the applications and system utilities. In the second embodiment, a user may start with a physical file system, and virtualize the file system by installing the virtual file system translation driver. When the driver is initially installed, all virtual file paths will be considered to translate to identically named physical file paths by default. In the third embodiment, virtual translations are automatically generated for all file paths when files and directories are created, and virtual file paths may bear limited, or no, resemblance to physical file paths.
Other inventors
-
Method and apparatus for passing messages using a fault tolerant storage system
Issued US 6,889,244
A method and apparatus pass messages between server and client applications using a fault tolerant storage system (FTSS). The interconnection fabric that couples the FTSS to the computer systems that host the client and server applications may also be used to carry messages. A networked system capable of hosting a distributed application includes a plurality of computer systems coupled to an FTSS via an FTSS interconnection fabric. The FTSS not only processes file-related I/O transactions, but also includes several message agents to facilitate message transfer in a reliable and fault tolerant manner. The message agents include a conversational communication agent, an event-based communication agent, a queue-based communication agent, a request/reply communication agent, and an unsolicited communication agent. The highly reliable and fault tolerant nature of the FTSS ensures that the FTSS can guarantee delivery of a message transmitted from a sending computer system to a destination computer system. As soon as a message is received by the FTSS from a sending computer system, the message is committed to a nonvolatile fault tolerant write cache. Thereafter, the message is written to a redundant array of independent disks (RAID) of the FTSS, and processed by one of the message agents.
Other inventors
-
Dynamic trace driven object code optimizer
Issued US 5,915,114
[Dynamic Code Optimization] A dynamic trace-driven object code optimizer provides for dynamic, real-time optimization of executable object code. The dynamic trace-driven object code optimizer bases the real-time optimization of executable object code on data gathered from execution traces collected in real-time. The executable code is then modified in real-time to generate optimized object code that is able to run more quickly and efficiently on the current system.
Other inventors
-
Extended address generating apparatus and method
Issued US 4,453,212
Address generating apparatus which uses narrow data paths for generating a wide logical address and which also provides for programs to access very large shared data structures outside their normally available addressing range and over an extended range of addresses. Selective indexed addressing is employed for providing index data which is also used for deriving variable dimension override data. During address generation, selected index data is added to a displacement provided by an instruction for deriving a dimension override value as well as an offset. The derived dimension override value is used to selectively access an address locating entry in a table of entries corresponding to the applicable program. The resulting accessed address locating entry is in turn used to determine the particular portion of memory against which the offset is to be applied.
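The flow the abstract describes (index plus displacement yields an offset and a dimension override value; the override value selects an address-locating entry from a per-program table; that entry gives the memory base against which the offset is applied) can be sketched as follows. The field widths and table contents here are made-up illustrations, not the patent's actual encoding:

```python
def generate_address(index, displacement, program_table):
    """Toy model of the two-stage address generation in the abstract.
    Assumes a hypothetical split: low 16 bits are the offset, the
    remaining high bits form the dimension override value."""
    total = index + displacement       # selected index data + displacement
    offset = total & 0xFFFF            # offset within the selected portion
    override = total >> 16             # dimension override value
    base = program_table[override]     # address-locating entry for program
    return base + offset               # final wide logical address

# Hypothetical per-program table of address-locating entries.
table = {0: 0x100000, 1: 0x200000}
print(hex(generate_address(0x10000, 0x0004, table)))
# 0x200004
```

The point of the scheme is that the datapath only ever adds narrow quantities (index, displacement, offset), while the table lookup supplies the wide base, so a narrow machine can form a much wider logical address.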
Other inventors
-
Address generating apparatus and method
Issued US 4,432,053
Address generating apparatus which uses narrow data paths for generating a wide logical address and which also provides for programs to access very large shared data structures outside their normally available addressing range. Selective indexed addressing is employed for providing both index data and variable dimension override data. During address generation, selected index data is used in conjunction with a displacement provided by an instruction for determining an offset. Dimension override data accompanying the selected index data is used to selectively access an address locating entry in a table of entries corresponding to the applicable program. The resulting accessed address locating entry is in turn used to determine the particular portion of memory against which the offset is to be applied.
Other inventors
-
Fault tolerant storage system having an interconnection fabric that also carries network traffic
US 6,938,071
A networked system includes a fault tolerant storage system (FTSS) having an interconnection fabric that also carries network traffic. A plurality of servers are coupled to an FTSS via an FTSS interconnection fabric. As soon as a packet is received from a sending node, the packet is committed to reliable, persistent, and fault-tolerant storage media within the FTSS, and will not be lost. If the destination node is one of the servers coupled to the FTSS, the FTSS can send an acknowledgment to the sending node guaranteeing delivery to the destination node, even though the destination node has not yet received the packet. The packet is then transmitted to the receiving node, with the receiving node sending an acknowledgment to the FTSS when the packet has been received successfully. At this point, the FTSS can remove the packet from the storage media, or retain the packet in a packet queue for a period of time to allow an application to reconstruct a network dialog in the event of an error or other type of failure. The present invention also allows packets to be routed between servers coupled to the FTSS and nodes coupled to an external network.
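The key ordering in the abstract is commit-then-acknowledge: the FTSS persists each packet before acking the sender, which is what lets it guarantee delivery before the destination has received anything. A toy sketch of that flow, with an in-memory list standing in for the nonvolatile fault-tolerant media and all names invented for illustration:

```python
class FTSSSketch:
    """Toy store-and-forward model of the commit-then-ack flow."""

    def __init__(self):
        self.persistent_queue = []  # stands in for nonvolatile RAID-backed storage

    def receive(self, packet, destination):
        # Commit to "persistent" storage FIRST, then acknowledge: from this
        # point the FTSS can guarantee delivery to the destination.
        self.persistent_queue.append((destination, packet))
        return "ACK"

    def deliver(self, destination):
        # Forward queued packets; a real system would retain them for a
        # while to support dialog reconstruction after a failure.
        delivered = [p for d, p in self.persistent_queue if d == destination]
        self.persistent_queue = [(d, p) for d, p in self.persistent_queue
                                 if d != destination]
        return delivered

ftss = FTSSSketch()
print(ftss.receive("hello", "node-b"))  # sender is acked immediately
print(ftss.deliver("node-b"))           # packet forwarded later
```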
Other inventors
-
Method and apparatus for virtualizing file access operations and other I/O operations
US 6,195,650
A method and apparatus virtualizes file access operations and other I/O operations in operating systems by performing string substitutions upon file paths or other resource identifiers to convert the virtual destination of an I/O operation to a physical destination. In accordance with the present invention, a virtual file system translation driver is interposed between a file system driver and applications and system utilities. The virtual file system translation driver receives file access requests from the applications and system utilities, and translates the file path to virtualize the file system. In a first embodiment, the file system is partially virtualized and a user can see both the virtual file paths and the physical file paths. In second and third embodiments, the file system is completely virtualized from the point of view of the applications and system utilities. In the second embodiment, a user may start with a physical file system, and virtualize the file system by installing the virtual file system translation driver. When the driver is initially installed, all virtual file paths will be considered to translate to identically named physical file paths by default. In the third embodiment, virtual translations are automatically generated for all file paths when files and directories are created, and virtual file paths may bear limited or no resemblance to physical file paths.
Other inventors
-
Split mode addressing a persistent memory
US 11,221,967
A system and method for addressing split modes of persistent memory are described herein. The system includes a non-volatile memory comprising regions of memory, each region comprising a range of memory address spaces. The system also includes a memory controller (MC) to control access to the non-volatile memory. The system further includes a device to track a mode of each region of memory and to define the mode of each region of memory. The mode is a functional use model.
Other inventors
-
Transactional cache memory system
US 8,924,653
A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
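The abstract's rule set maps onto the MOESI states directly: before a transaction starts, its lines are forced to Shared or Invalid; if an external request then touches a line the transaction has taken to Modified, Owned, or Exclusive, the M/O line is discarded without writeback (so speculative data never reaches main memory) and the transaction aborts. A toy state-machine sketch, with all class and method names invented for illustration:

```python
M, O, E, S, I = "M", "O", "E", "S", "I"  # MOESI cache-line states

class TxnCacheSketch:
    """Toy model of the transactional rule described in the abstract."""

    def __init__(self, lines):
        self.lines = dict(lines)  # addr -> MOESI state
        self.aborted = False

    def begin(self, addrs):
        # Ensure every accessed line is Shared or Invalid before starting.
        for a in addrs:
            if self.lines[a] in (M, O, E):
                self.lines[a] = S  # (a real cache would write back M/O here)

    def external_request(self, addr):
        state = self.lines.get(addr)
        if state in (M, O):
            self.lines[addr] = I   # invalidate WITHOUT writing to memory
            self.aborted = True
        elif state == E:
            self.lines[addr] = S   # demote (the abstract allows S or I)
            self.aborted = True

cache = TxnCacheSketch({0x10: E, 0x20: M})
cache.begin([0x10, 0x20])      # both lines downgraded to Shared
cache.lines[0x20] = M          # transactional write dirties a line
cache.external_request(0x20)   # conflict: line dropped, txn aborts
print(cache.lines[0x20], cache.aborted)
# I True
```

Discarding M/O lines without writeback is what gives the abort its transactional semantics: main memory still holds the pre-transaction values, so aborting is simply forgetting the speculative state.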
Other inventors
Languages
-
English
Native or bilingual proficiency
-
German
Elementary proficiency
Organizations
-
Association for Computing Machinery
Past: Editor-in-Chief, ACM SIGMETRICS Performance Evaluation Review (ten years)
-
Computer Measurement Group
Current: Board Member, Rocky Mountain CMG
Recommendations received
5 people have recommended Blaine
More activity by Blaine
-
50 LIFE RULES THAT ARE GOLDEN. 1. Always stand up while shaking hands with someone. 2. Never make the first offer in a negotiation. 3. Keep their…
Liked by Blaine Gaither
-
I'm incredibly proud of our RISC-V International progress, community, and global adoption across workloads, industries, and geographies. As I resign…
Liked by Blaine Gaither
-
I’m beyond thrilled to see the MLPerf Client benchmark released! From announcement to release in under a year is a testament to the teamwork here…
Liked by Blaine Gaither
-
I'm thrilled to announce that I'll be presenting at the University of Wisconsin's Information and Technology Leadership Conference! 🎉 I’ll be…
Liked by Blaine Gaither
-
A little startup inspirational talk today.... about getting your shot! I am here in Philly for the day for a talk with Comcast, NBCUniversal,…
Liked by Blaine Gaither
-
“If you don’t stand for something, you’ll fall for anything” – Alexander Hamilton This time of year always stimulates reflection – I thought I’d…
Liked by Blaine Gaither
-
“We looked at leveraging AI in our SaaS product and decided not to as we don’t see much use or promise in it” --- said NO ONE in 2024 Interest in AI…
Liked by Blaine Gaither