From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 17, 2008 Subject: Has anyone got a rule of thumb for calculating data center sizing? Blog: Computer Networking

When we were doing our ha/cmp product, we had coined the terms disaster survivability and geographic survivability. As hardware and software became more reliable, the major remaining failure/outage causes were becoming environmental ... which required countermeasures like geographic separation. lots of old posts specifically related to continuous availability
https://www.garlic.com/~lynn/submain.html#available
part of ha/cmp scale-up was physically packaging more computing into a smaller footprint. Recent answer discussing the BLADE/GRID theme of increasing amounts of computing in smaller & smaller footprints.
http://www.linkedin.com/answers/technology/information-technology/information-storage/TCH_ITS_IST/217659-23436977
Some datacenters can run to multiple billions of dollars in a single location ... but if there is significant business dependency on dataprocessing availability ... the trend is to separate the operation into multiple different locations.
For old historical reference there was a 1970 datacenter that was characterized as being a $2.5billion "windfall" for IBM (in 1970 dollars).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 17, 2008 Subject: Do you believe Information Security Risk Assessment has shortcoming like Blog: Information Security

one of the things that we worked on was what we called parameterised risk management. it basically did things like in-depth threat and vulnerability analysis and kept track of the threat/vulnerability characteristics of the individual components. parameterised risk management included the concept that threats/vulnerabilities could change over time ... i.e. as technology advances are made, the threat/vulnerability of specific components can change.
one of the things that parameterised risk management allowed for was a large variety of different technologies in use across the infrastructure ... and the possibility that the integrity of any specific component can be affected in real time (in order to support real-time changes, the original threat/vulnerability and integrity characteristics/profile have to be maintained so that changes can be mapped in real time, and include the sense of more semantic meaning as opposed to purely numeric) ... which, in turn, might require real-time changes in operations (possibly additional compensating procedures).
parameterised risk management was some of the work included in the aads patent portfolio, referenced here:
https://www.garlic.com/~lynn/x959.html#aads
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sat, 17 May 2008 19:19:43 -0400
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
SCI cache consistency directory mechanism was written up in the standard ... from approx. same period. Standard SCI was 64-way, convex used it with 64 two-processor pa-risc boards (exemplar) and sequent used it with 64 four-processor intel boards (numa-q).
SCI reference:
http://www.scizzl.com/Perspectives.html
the above mentions SGI using SCI w/o actually mentioning SCI by name:
Silicon Graphics Makes the Arguments for Using SCI!
http://www.scizzl.com/SGIarguesForSCI.html
for other drift, acm article from '97 ...
A Hierarchical Memory Directory Scheme Via Extending SCI for Large-Scale
Multiprocessors
http://portal.acm.org/citation.cfm?id=523549.822844
abstract for above:
SCI (Scalable Coherent Interface) is a pointer-based coherent directory
scheme for large-scale multiprocessors. Large message latency is one of
the problems with SCI because of its linked list structure: the
searching latency can grow as a linear order of the number of
processors. In this paper, we focus on a hierarchical architecture to
propose a new scheme - EST (Extending SCI-Tree), which may reduce the
message traffic and also take the advantages of the topology
property. Simulation results show that the EST scheme is effective in
reducing message latency and communication cost when compared with other
schemes.
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 18 May 2008 00:16:53 -0400
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
alewife and dash were university research projects ... SCI was passed as standard with extensive write up in the standard and used by several vendors in shipped computer products.
i've commented before that at one point, we were asked if we would be interested in heading up effort to commercialize (SUN's object-oriented) SPRING operating system, turning it out as product (this was in the era when object-oriented was all the rage ... apple was doing PINK). It would have been done in conjunction with product that had possibly thousands of sparc processors interconnected with SCI.
I've got early writeup of the SCI directory cache consistency protocol from before standard was passed ... and just doing some picking around on the SCI website for more information ...
also from above:
At long last the IEC/IEEE publications deadlock has been resolved, and
the corrected SCI spec has been published by the IEC as "ISO/IEC
Standard 13961 IEEE". Unfortunately, the updated diskette was not
incorporated. However, the updated C code is online, at sciCode.c text
file (1114K). This release does not have the separate index file that
shipped on the original diskette, because with the passage of time we
lost the right of access to the particular software that generated that
index. (People change employers.)
Unfortunately, the IEEE completely bungled the update, reprinting the
old uncorrected standard with a new cover and a few cosmetic
changes. Until this has been corrected, the IEEE spec should be avoided.
... snip ...
i've commented before that in that period a lot of the hippi standard work was backed by LANL, the fcs standard work was backed by LLNL, and the SCI work came out of SLAC ... all furthering/contributing to commoditizing various aspects of high-performance computing.
the scizzl.com web site also lists SCI book
https://www.amazon.com/exec/obidos/ASIN/3540666966/qid=956090056/sr=1-1/103-0276094-5848643
from above:
Scalable Coherent Interface (SCI) is an innovative interconnect standard
(ANSI/IEEE Std 1596-1992) addressing the high-performance computing and
networking domain. This book describes in depth one specific application
of SCI: its use as a high-speed interconnection network (often called a
system area network, SAN) for compute clusters built from commodity
workstation nodes. The editors and authors, coming from both academia
and industry, have been instrumental in the SCI standardization process,
the development and deployment of SCI adapter cards, switches, fully
integrated clusters, and software systems, and are closely involved in
various research projects on this important interconnect. This
thoroughly cross-reviewed state-of-the-art survey covers the complete
hardware/software spectrum of SCI clusters, from the major concepts of
SCI, through SCI hardware, networking, and low-level software issues,
various programming models and environments, up to tools and application
experiences.
... snip ...
besides sci defining directory protocol for memory/cache consistency, it also defined a number of other uses.
Comparison of ATM, FibreChannel, HIPPI, Serialbus, SerialPlus SCI/LAMP
http://www.scizzl.com/SCIvsEtc.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 18, 2008 Subject: A Merit based system of reward - Does anybody (or any executive) really want to be judged on merit? Blog: Organizational Development

this business school article mentions that there are about 1000 CEOs that are responsible for about 80% of the current financial mess (and it would go a long way to fixing the mess if the gov. could figure out how for them to lose their jobs)
while this article points out there was $137 billion in bonuses paid out
in the period that the current financial mess was being created (in
large part to those creating the current financial mess)
http://www.businessweek.com/#missing-article
For slight drift ... the current financial mess heavily involved toxic CDOs which were also used two decades ago during the S&L crisis to hide the underlying value ... and I've used the analogy about toxic CDOs being used to obfuscate the "observe" in Boyd's OODA-loop.
This article includes mention of SECDEF recently honoring Boyd (to the
horror of the air force)
http://www.time.com/time/nation/article/0,8599,1733747,00.htm
Now one of the things that Boyd used to tell young officers was that
they had to choose between doing and being. Being led to all
sorts of rewards and positions of honor ... while if you were
effective at doing ... frequently the reward would be a kick in
the stomach (a little cleaned up in the following):
http://www.d-n-i.net/dni/john-r-boyd/to-be-or-to-do/
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 18 May 2008 22:16:15 -0700 (PDT)
On May 18, 10:09 am, Quadibloc <jsav...@ecn.ab.ca> wrote:
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Subject: Removing the Big Kernel Lock Newsgroups: alt.folklore.computers Date: Mon, 19 May 2008 07:00:37 -0700 (PDT)
Removing the Big Kernel Lock
from above:
"There is a big discussion going on over removing a bit of
non-preemptable code from the Linux kernel. 'As some of the latency
junkies on lkml already know, commit 8e3e076 in v2.6.26-rc2 removed
the preemptable BKL feature and made the Big Kernel Lock a spinlock
and thus turned it into non-preemptable code again. "This commit
returned the BKL code to the 2.6.7 state of affairs in essence," began
Ingo Molnar. He noted that this had a very negative effect on the real
time kernel efforts, adding that Linux creator Linus Torvalds
indicated the only acceptable way forward was to completely remove the
BKL.'"
... snip ...
when charlie was doing fine grain locking work on cp67 smp
support at the science center in the late 60s and early 70s
https://www.garlic.com/~lynn/subtopic.html#545tech
one of the things he invented was the compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
BKL or global system/kernel lock was (really) the state of the art at that time.
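as a rough illustration of the difference (my own sketch in C using the gcc __sync atomic builtins ... not cp67 code and not the Linux kernel implementation), a single global "big kernel lock" serializes everything, while compare&swap allows fine-grain, even lock-free, updates of individual structures:

/* minimal sketch (not cp67 code and not the Linux implementation):
 * contrast a single global "big kernel lock" with fine-grain
 * compare&swap style updates, using the gcc __sync atomic builtins. */
#include <stdio.h>

static volatile int big_kernel_lock = 0;      /* one lock for everything */

static void bkl_acquire(void)
{
    /* spin until we atomically change 0 -> 1 */
    while (__sync_val_compare_and_swap(&big_kernel_lock, 0, 1) != 0)
        ;                                     /* busy-wait */
}

static void bkl_release(void)
{
    __sync_lock_release(&big_kernel_lock);    /* set back to 0 */
}

/* fine-grain alternative: update a single shared counter directly
 * with compare&swap -- no global serialization at all */
static volatile long shared_counter = 0;

static void counter_add(long n)
{
    long oldval, newval;
    do {
        oldval = shared_counter;
        newval = oldval + n;
    } while (__sync_val_compare_and_swap(&shared_counter, oldval, newval) != oldval);
}

int main(void)
{
    bkl_acquire();                /* everything in here is serialized */
    counter_add(1);
    bkl_release();

    counter_add(41);              /* lock-free update, no BKL needed */
    printf("counter=%ld\n", shared_counter);
    return 0;
}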
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Annoying Processor Pricing Newsgroups: alt.folklore.computers,comp.arch Date: Tue, 20 May 2008 12:52:06 -0400
"Sarr J. Blumson" <sarr.blumson@alum.dartmouth.org> writes:
telco infrastructures had large fixed infrastructure, staff, and costs ... with the costs being recovered by usage charges. deployment of large amounts of (dark) fiber in the early 80s ... significantly increased the capacity ... but there was a huge chicken/egg situation.
huge increases in usage wouldn't come w/o huge decreases in usage charges. huge increases in usage also wouldn't come w/o a whole new generation of bandwidth hungry applications ... but those wouldn't be invented without demand, and there wouldn't be demand w/o huge usage charge decreases.
just dropping the usage charges ... would still take maybe a decade for the demand to evolve along with new generation of bandwidth hungry applications (a decade where infrastructure might otherwise operate at enormous losses ... because of relatively fixed costs).
one of the scenarios was the educational/nsf infrastructure: provide significant resources for "sandbox" operation, with the limitation that the contributed resources didn't usurp standard commercial operation. this would provide an "incubator" environment for development/evolution of bandwidth hungry applications w/o any significant impact on regular commercial revenue.
most of the links in that period were 56kbit ... but we were running
T1 and higher speed links internally in the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
recent hsdt topic drift posts mentioning encryption requirement:
https://www.garlic.com/~lynn/2008.html#79 Rotary phones
https://www.garlic.com/~lynn/2008e.html#6 1998 vs. 2008: A Tech Retrospective
https://www.garlic.com/~lynn/2008h.html#87 New test attempt
and we feel that strongly influenced nsfnet backbone rfp to specify
t1 ... some old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
however, eventually we weren't allowed to bid on nsfnet ... the director of nsf thought that writing a letter to the company (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and Director of Research, copying the IBM CEO, and including statements that what we already had running was at least five yrs ahead of all nsfnet backbone bids) would help ... but that just aggravated the internal politics.
misc. past nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
the winning bid actually put in 440kbit links (not t1) ... but somewhat to meet the letter of the bid ... they put in T1 trunks and used telco-type multiplexors to operate multiple 440kbit links over the T1 trunks (we made some disparaging remarks that the T1 trunks may have, in turn, actually been multiplexed at some point over T5 trunks ... which would then let them make claims about the nsfnet backbone being a T5 network???).
however, we've also commented that possibly resources 4-5 times the amount of the nsfnet backbone bid were actually used (effectively improving the incubator atmosphere encouraging the development and use of bandwidth hungry applications ... w/o impacting existing commercial usage-based revenue).
there were even more significant resources contributed to many of the
local networks (that were interconnected by the nsfnet backbone) as
well as to the bitnet & earn academic networks
https://www.garlic.com/~lynn/subnetwork.html#bitnet
a corresponding (processor specific) charging issue in the 60s was the cpu-meter used for charging ... most of the machines were leased/rented and charged based on usage.
this impacted the migration to 7x24 commercial time-sharing use. in the
virtual machine based offerings (cp67 and then morphing into vm370)
https://www.garlic.com/~lynn/submain.html#timeshare
normal 1st shift usage charges were enough to cover the fixed operating
costs. a problem was frequently that offshift usage (revenue) wouldn't
cover the corresponding usage charges (including vendor cpu-meter based
charges). the 360/370 cpu-meter would run whenever the processor was
active and/or when there was active channel i/o operation (and would
continue to "coast" for 400 milliseconds after all activity ceased). A
challenge was to significantly reduce off-shift and weekend costs
... while still leaving the system available for use ... including
remote & home access ... i.e. I've had home dialup access starting in
mar70 into the science center service
https://www.garlic.com/~lynn/subtopic.html#545tech
one of the issues was migration to "lights-out" operation ... i.e. the machine being able to operate/run w/o actually having a human present to perform operations (more & more automated operator).
the other was "channel i/o programs" that would be sufficiently active to allow/accept incoming characters ... but otherwise sufficiently idle that the cpu-meter would come to a stop (i.e. being "prepared" to accept incoming characters).
a corresponding phenomenon was that off-shift charging (at various levels) has frequently been a fraction of 1st shift charging. The issue is that a lot of the infrastructure costs are fixed regardless of the time-of-day ... and there has tended to be heavy provisioning to handle (peak) 1st shift operation (in the past, the cost of peak computer usage provisioning was much larger because computer hardware was significantly more expensive). off-shift charging policies were frequently focused on attempting to migrate usage in order to utilize otherwise idle capacity.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: pro- foreign key propaganda? Newsgroups: comp.databases.theory,alt.folklore.computers Date: Tue, 20 May 2008 15:37:52 -0400
paul c <toledobysea@ac.ooyah> writes:
Also the underlying disk technology had made a trade-off between
relatively abundant i/o capability and the limited availability of
real/electronic storage in the "CKD" ... count-key-data architecture
... misc. past posts
https://www.garlic.com/~lynn/submain.html#dasd
... it was possible to create i/o requests that performed extended searches for data &/or key pattern matches on disk ... w/o having to know specific location for some piece of information. This was used extensively in various kinds of directories (kept on disk w/o needing to use extremely scarce real storage).
However the 60s era databases tended to have direct record pointers exposed in database records (independent of the multi-track "searching" operations which would tell the disk to try and find the record).
I've posted several times before about the discussions between the IMS
group in STL/bldg90 and the (original relational/SQL) System/R group in
SJR/bldg28 (where Codd was located). Misc. past posts mentioning
System/R
https://www.garlic.com/~lynn/submain.html#systemr
The IMS group pointed out that the index implementation in relational (which contributed to eliminating exposed record pointers as part of the database schema) typically doubled the physical disk space ... vis-a-vis IMS ... and greatly increased the number of physical disk i/os (as part of processing the index to find actual record location). The relational counter-argument was that eliminating the exposed record pointers as part of the database schema significantly reduced the administrative and management costs for large complex databases.
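a toy sketch of that trade-off (purely my own illustration ... read_block() is a hypothetical stand-in for one physical disk i/o, not any IMS or System/R interface): following an exposed record pointer costs a single i/o, while going through a multi-level index costs one i/o per index level plus one for the record ... but nothing in the stored data has to change when records get reorganized:

/* toy illustration of the direct-pointer vs index trade-off discussed
 * above; read_block() is a hypothetical stand-in for one disk i/o. */
#include <stdio.h>

static long io_count = 0;

static void read_block(long blockno)        /* pretend disk read */
{
    io_count++;
    (void)blockno;
}

/* 60s style: the record pointer is stored directly in the parent
 * record (exposed in the database schema) -- one i/o to fetch it */
static void fetch_via_record_pointer(long recptr)
{
    read_block(recptr);
}

/* relational style: walk a multi-level index to translate key ->
 * record location, then fetch the record -- more i/o, but no
 * physical pointers embedded in application data */
static void fetch_via_index(long key, int index_levels)
{
    for (int i = 0; i < index_levels; i++)
        read_block(key + i);                /* one read per index level */
    read_block(key);                        /* finally read the record */
}

int main(void)
{
    fetch_via_record_pointer(12345);
    printf("direct pointer: %ld i/o\n", io_count);

    io_count = 0;
    fetch_via_index(678, 3);                /* e.g. a 3-level index */
    printf("3-level index:  %ld i/o\n", io_count);
    return 0;
}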
Going into the 80s, there were significant increases in the availability of electronic storage and also a significant decline in computer hardware costs (especially compared to increasing people costs). This shift helped with the uptake of relational ... the reduction in disk costs (and significant increase in bytes/disk) eliminated much of the argument about the disk space requirements for the relational index. The increases in sizes of computer memories (and reduction in cost) allowed significant amounts of the relational indexes to be cached in real storage (mitigating the significant increase in i/os that had been needed to process the indexes). The significant reduction in administrative and management costs for relational (vis-a-vis IMS) was not only a cost issue but also became a skills availability issue (it became much easier to obtain and justify skills for relational deployment).
The database direct record pointers as well as the extensive "searching" capability (both from the 60s) could be considered a result of the extremely constrained amount of available system memory/storage.
The shift in relative amounts of system memory/storage vis-a-vis i/o
capacity actually started by the mid-70s. in the early 80s, I was making
statements that relative disk thruput had declined by better than an
order of magnitude (ten times) over a period of 10-15 yrs (i.e. memory
and cpu had increased by a factor of 40-50, disk thruput had only
increased by a factor of 3-5). This got me into some amount of trouble with
the executives that ran the disk division. at one point they assigned
their performance group to refute my statements. After a couple of
weeks ... they came back and observed that I had actually somewhat
understated the technology shift. on the other hand ... they did let me
periodically play disk engineer in the disk engineering and product test
labs ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#disk
Other purely historical topic drift ... the first relational product was for Multics ... from the 5th flr at 545 tech sq.
The science center was on the 4th flr of 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech
which had come up with the original virtual machine operating systems ... thru various generations cp40, cp67, vm370, etc. it was also where (gml) markup language was invented in 1969 (subsequently morphing into sgml, html, xml, etc).
And all the System/R development was done on vm370 virtual machine operating system at SJR.
The continuing increases in system real storage have seen another transition. Relational implementations during the 70s and 80s were mostly oriented towards the primary location of information being on disk, with copies kept cached in real storage. However, in the 90s there started appearing implementations that assumed the whole database was in real storage and disks were purely for transaction integrity.
These claimed ten times the performance of the earlier generation of "caching" oriented implementations (even on the same hardware where the caching oriented implementations had their complete database also resident in real storage).
These high-performance relational databases saw some uptake in the telco and cellphone markets ... used for large number of call detail/charge records. There was also some speculation in this period that telcos might be able to move into the payment transaction market ... leveraging their highly efficient call transaction database implementations to first gain a foothold with "micro-payments" and then moving into the rest of the payment transaction market (being able to process significantly larger volume of transactions at a significantly lower cost). The big inhibitor for telcos pulling this off seems to be figuring out how to deal with the financial liability issues with regard to handling payments.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Obfuscation was: Definition of file spec in commands Newsgroups: alt.folklore.computers Date: Tue, 20 May 2008 15:58:45 -0400
Walter Bushell <proto@xxx.com> writes:
a couple decades later there were some jokes about my 4-shift week,
1st shift in sjr/bldg. 28 ... things like System/R
https://www.garlic.com/~lynn/submain.html#systemr
2nd shift in bldgs 14&15 (disk engineering and product test lab)
https://www.garlic.com/~lynn/subtopic.html#disk
frequent 3rd shift in bldg90/stl on various activities
and 4th shift (weekends) up at HONE (virtual machine based time-sharing
service providing online sales & marketing support)
https://www.garlic.com/~lynn/subtopic.html#hone
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Different Implementations of VLIW . Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 20 May 2008 18:45:02 -0400
"Phil Weldon" <not.disclosed@example.com> writes:
the software paradigm to support it was sort of like an electronic paging disk ... except, instead of asynchronous i/o operations ... it had synchronous move instructions (the claim being that while the instruction took some amount of time ... it was way less than traditional asynchronous i/o interrupt handling pathlengths).
it was a "fast", wide bus between expanded store (placing the expanded store further away, with longer latency) and processor memory that moved 4k bytes at a time ... one could also think of processor memory as a software controlled (store-in) cache with 4kbyte cache lines ... although the amount of expanded store tended to be about the same as regular processor memory.
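a very rough sketch of that synchronous-move paradigm (purely illustrative ... move_4k_to_expanded()/move_4k_from_expanded() are hypothetical stand-ins for the synchronous 4k-at-a-time move operations, not the actual machine instructions):

/* rough sketch of the synchronous-move paradigm described above;
 * the two routines are hypothetical stand-ins for the synchronous
 * 4k move operations -- contrast with scheduling an asynchronous
 * paging i/o and taking an interrupt when it completes. */
#include <string.h>
#include <stdint.h>

#define PAGE 4096u

static uint8_t expanded_store[1024][PAGE];   /* pretend expanded store */

/* synchronous: the "instruction" completes before returning -- no
 * i/o interrupt handling pathlength */
static void move_4k_to_expanded(unsigned slot, const void *page)
{
    memcpy(expanded_store[slot], page, PAGE);
}

static void move_4k_from_expanded(unsigned slot, void *page)
{
    memcpy(page, expanded_store[slot], PAGE);
}

int main(void)
{
    static uint8_t mainstore_page[PAGE];

    mainstore_page[0] = 42;
    move_4k_to_expanded(7, mainstore_page);   /* page out, synchronously */
    move_4k_from_expanded(7, mainstore_page); /* page back in            */
    return 0;
}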
the expanded store bus came in handy when they went to add hippi i/o support to 3090. the standard channel interfaces wouldn't handle the i/o rate. they cut into the side of the expanded store bus to add hippi. however, the mainframe interface was still 4k move instruction ... so hippi i/o programming was done with a kind of peek/poke paradigm using 4k move to/from instructions to reserved expanded store addresses.
in later generations, memory densities and packaging technology eliminated the need for expanded store ... however, there continued to be LPAR configuration support for emulated expanded store (using standard memory) ... apparently because of various legacy software implementation considerations.
LPAR has been an evolution of PR/SM, introduced in the 3090s ... somewhat in response to Amdahl's hypervisor. LPARs ... or Logical PARtitions ... implement a significant subset of virtual machine capability directly in the "hardware" (doesn't require a *software* virtual machine operating system).
... i don't have recollection of costs ... however 3090 archives
web page:
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
the above mentions the (two) 3092 processor controllers ... service processors ... which were really a pair of 4361s running a modified version of vm370 release 6 ... recent post discussing the 3090 (4361) service processors:
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation
the 3090 archive also mentions that the (4361) 3092 processor controller required two 3370 Model A2 disks ... and access to 3420 tape drives (for read/write files).
for other memory topic drift x-over, post from today in c.d.t about
rdbms
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Different Implementations of VLIW . Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 20 May 2008 18:57:20 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
a different 3090 "capacity planning" issue was the number of channels (w/o regard to peak transfer rate and not being able to support hippi).
3090s were built with large modules ... and had been profiled to have balanced system thruput with a specific number of i/o channels. however, fairly late in the development cycle ... it was "discovered" that the new disk controller (3880) had significantly higher protocol processing overhead ... significantly increasing channel busy time (even tho the data transfer rate had increased to 3mbytes/sec, the disk control processor handling i/o commands was quite slow).
The revised system thruput profile (using the actual 3880 controller overhead channel busy numbers) required a significant increase in the number of channels (in order to meet the aggregate thruput objectives). This meant that 3090 manufacturing required an extra module ... which noticeably increased the 3090 manufacturing costs.
There was a joke that the incremental manufacturing cost for each 3090 should be charged off against the disk business unit's bottom line ... rather than the processor business unit's bottom line.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: pro- foreign key propaganda? Newsgroups: comp.databases.theory,alt.folklore.computers Date: Tue, 20 May 2008 22:37:44 -0400
paul c <toledobysea@ac.ooyah> writes:
BDAM ... basic direct access method. basically had 32bit record no/ptr
Misc. past posts mentioning BDAM (and/or CICS ... an online transaction
monitor originating in the same era and frequently deployed with
applications that used BDAM files)
https://www.garlic.com/~lynn/submain.html#bdam
Online medical library was developed using bdam in the 60s and was still in extensive world-wide use 30 years later ... still being the largest online search facility in the world until being eclipsed by some of the popular internet search engines sometime in the 90s.
One of the processes was that medical knowledge articles were indexed in a large number of different ways, keywords, authors, etc. Tables were built of all the different ways articles were indexed. In effect the record number of the article became a unique key for each article. A specific keyword would have a list of all articles that the keyword was applicable for ... i.e. condensed set of 32bit integers ... the record ptr was effectively used as unique key of the article.
Boolean keyword searches ... became ANDs and ORs of the sets of unique keys (unique record ptrs). An AND of two keywords becomes the intersection of key/recordptrs from the two lists. An OR of two keywords becomes the union of key/recordptrs from the two lists. This was all built on top of underlying BDAM support.
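a minimal sketch of that style of boolean keyword search (my own illustration, not the actual BDAM-based implementation), treating each keyword's article list as a sorted array of 32bit record numbers, with AND as intersection and OR as union:

/* sketch of boolean keyword search over per-keyword lists of 32bit
 * record numbers (sorted); AND = intersection, OR = union.
 * illustration only -- not the actual BDAM-based implementation. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* intersection of two sorted id lists -> out, returns count */
static size_t and_lists(const uint32_t *a, size_t na,
                        const uint32_t *b, size_t nb, uint32_t *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < na && j < nb) {
        if (a[i] < b[j])       i++;
        else if (b[j] < a[i])  j++;
        else { out[n++] = a[i]; i++; j++; }
    }
    return n;
}

/* union of two sorted id lists -> out, returns count */
static size_t or_lists(const uint32_t *a, size_t na,
                       const uint32_t *b, size_t nb, uint32_t *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < na || j < nb) {
        if (j >= nb || (i < na && a[i] < b[j]))  out[n++] = a[i++];
        else if (i >= na || b[j] < a[i])         out[n++] = b[j++];
        else { out[n++] = a[i]; i++; j++; }      /* in both lists */
    }
    return n;
}

int main(void)
{
    /* article record numbers indexed under two hypothetical keywords */
    uint32_t cardiology[] = { 17, 42, 90, 1001 };
    uint32_t pediatrics[] = { 42, 77, 1001, 2000 };
    uint32_t result[8];

    size_t n = and_lists(cardiology, 4, pediatrics, 4, result);
    for (size_t k = 0; k < n; k++)
        printf("AND hit: record %u\n", (unsigned)result[k]);

    n = or_lists(cardiology, 4, pediatrics, 4, result);
    for (size_t k = 0; k < n; k++)
        printf("OR hit:  record %u\n", (unsigned)result[k]);
    return 0;
}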
Part of the issue attempting to replace the bdam implementation was that it was highly efficient ... having collapsed the article's unique key and the corresponding record pointer into the same value (however, there was significant maintenance activity ... so significant access & thruput was needed to justify the extensive care and support). Problems also started creeping in when the number of articles started exceeding the capacity of the record ptr/key.
...
ISAM ... indexed sequential access method ... had really complex channel programs. the whole structure of the database was stored on disk. a channel program could be extensive and very complex ... starting out searching for a specific index record ... which would then be read into a memory location which was the argument of a following search command ... this could continue for multiple search sequences in a channel program, until the pointer for the appropriate data record was found and the record read/written. Channel programs could have relatively complex condition testing, branching, and looping.
ISAM was an enormously I/O intensive resource hog ... and went out of favor as the trade-off between disk i/o resources and real memory resources shifted (my reference to relative system disk i/o thruput having declined by better than an order of magnitude during the period) ... and it became much more efficient to maintain index structures cached in processor storage.
ISAM channel programs were also a real bear to provide virtualization support for.
....
for other reference, the wiki IMS page:
https://en.wikipedia.org/wiki/Information_Management_System
from above:
IBM designed IMS with Rockwell and Caterpillar starting in 1966 for the
Apollo program. IMS's challenge was to inventory the very large Bill of
Materials for the Saturn V moon rocket and Apollo space vehicle.
... snip ...
and:
In fact, much of the world's banking industry relies on IMS, including
the U.S. Federal Reserve. For example, chances are that withdrawing
money from an automated teller machine (ATM) will trigger an IMS
transaction. Several Chinese banks have recently purchased IMS to
support that country's burgeoning financial industry. Reportedly IMS
alone is a $1 billion (U.S.) per year business for IBM.
... snip ...
Bottom line in the wiki article is that IMS outperforms relational for a given task ... but requires more effort to design & maintain.
And CICS wiki page ... for much of their lives ... IMS and CICS have
somewhat competed as "transaction monitors":
https://en.wikipedia.org/wiki/CICS
For old CICS folklore ... the univ. that I was at in the 60s was selected as one of the beta test sites for the original CICS product release ... and one of the things I got tasked as an undergraduate was helping debug CICS.
and BDAM wiki page ...
https://en.wikipedia.org/wiki/Basic_direct_access_method
and ISAM wiki page (although it doesn't talk about the really
complex channel program implementation support done in the 60s):
https://en.wikipedia.org/wiki/ISAM
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: pro- foreign key propaganda? Newsgroups: comp.databases.theory,alt.folklore.computers Date: Wed, 21 May 2008 08:17:47 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
aka ... overloading a value with multiple characteristics can significantly improve runtime operation ... but can become an administrative burden to maintain the consistency of all the different characteristics.
reducing the number of different characteristics a value has to represent will reduce the consistency administrative burden but will typically increase the runtime overhead (navigating internal tables relating the different characteristics).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DASD or TAPE attached via TCP/IP Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 21 May 2008 10:02:07 -0400
Michael.Knigge@SET-SOFTWARE.DE (Michael Knigge) writes:
was somehow leaked to at&t longlines circa 1975. they took this highly modified "csc/vm" vm370 release and made numerous local modifications ... including remote device support ... that would run over various kinds of communication links. basically the virtual machine channel program simulation would forward the stuff to a remote site for actual execution on the real locally attached device. this system managed to propagate to a number of at&t longlines machines. Nearly a decade later, the at&t national account manager managed to track me down ... longlines had continued to migrate the vm370 system thru various generations of mainframes ... but it came to an end with the move to 370/XA ... and he was looking for assistance in helping move longlines off that vm370 system.
this isn't all that much different from standard i/o virtualization, aka a copy of the "virtual" channel programs is replicated with real addresses substituted for virtual addresses. in the case of a remote device, the replicated "real" channel programs are run on the remote system ... with appropriate fiddling of virtual pages on the application machine and the real pages on the machine where the device is attached.
some amount of the fiddling was handled by services running in a separate virtual machine. note this isn't all that different from what is done by various virtual machine mainframe simulators that run on various other kinds of platforms ... and include simulation of various kinds of mainframe i/o devices on completely different kinds of devices.
the specific communication mechanism used is the least of the issues.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: DASD or TAPE attached via TCP/IP Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 21 May 2008 12:07:53 -0400
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
for some topic drift ... posts in a recent thread in
comp.databases.theory:
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?
https://www.garlic.com/~lynn/2008i.html#12 pro- foreign key propaganda?
https://www.garlic.com/~lynn/2008i.html#13 pro- foreign key propaganda?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: should I encrypt over a private network? Newsgroups: comp.security.misc Date: Fri, 23 May 2008 09:49:46 -0400
marlow.andrew writes:
in that period there was a story about a foreign consulate location in one of the major cities that apparently was chosen because it had line-of-sight to a large microwave communication antenna array for major cross-country communication. there were comments that a lot of foreign government espionage was heavily intertwined with industrial espionage.
slightly earlier, in the early part of the 80s ... was looking at deploying dial-up access into the corporate network for both (actually a major expansion of) home access (since i've had dial-up access at home since mar70) and hotel/travel access. a detailed study found that hotel pbx rooms were frequently especially vulnerable ... and as a result the encryption requirement was extended to all dial-up access ... which required designing and building a custom encrypting dial-up modem for these uses.
a lot of the internet hype seems to have distracted attention from both other forms of external compromises as well as internal attackers.
for a little additional topic drift:
https://www.garlic.com/~lynn/2008h.html#87 New test attempt
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 23, 2008 Subject: Does anyone have any IT data center disaster stories? Blog: Information Security

When we were doing the high availability HA/CMP product we looked at all kinds of ways that things could fail. One of the things was that over the decades, both software and hardware reliability had increased significantly. As a result the remaining failure modes tended to be human mistakes and environmental. As part of HA/CMP marketing, we had coined the terms disaster survivability and geographical survivability (to differentiate from disaster/recovery).
As an example from this period, there was the garage bombing at the World Trade Center ... which included taking out a "disaster/recovery" datacenter that was located in the lower floors. Later there was a large financial transaction processing center that had its roof collapse because of snow loading. Its disaster/recovery datacenter was the one in the World Trade Center (that was no longer operational).
On the other hand, long ago and far away, my wife had been conned into going to POK to be in charge of loosely-coupled architecture (mainframe for cluster). While there she created Peer-Coupled Shared Data architecture ... which, except for IMS hot-standby, didn't see any takeup until SYSPLEX.
https://www.garlic.com/~lynn/submain.html#shareddata
There has been another very large financial transaction processing operation that has triple replicated locations and has attributed its 100 percent availability to
• automated operator
• ims hot-standby
posts from a slightly related discussion in the comp.databases.theory forum:
https://www.garlic.com/~lynn/2008i.html#8
https://www.garlic.com/~lynn/2008i.html#12
https://www.garlic.com/~lynn/2008i.html#13
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sat, 24 May 2008 11:30:39 -0400
Johnny Billquist <bqt@update.uu.se> writes:
note that 370 cache coherency resulted in slowing the processor cycle down by ten percent ... a basic two-processor smp started out at 1.8 times a single processor because each processor ran ten percent slower to allow for signaling and listening to the other cache. the processing of cache invalidate signals received from the other cache further degraded performance (over and above the ten percent slow-down just to allow for signaling and listening).
the favorite son operating system for two-processor smp typically was quoted at 1.4-1.5 times the thruput of a single processor ... after throwing in kernel serialization, locking, and kernel software signaling overhead.
the stuff i had done, with some sleight of hand, got very close to the 1.8 times hardware thruput ... and in a few cases got two times or better (because of some cache affinity and cache hit ratio effects).
misc. past smp posts and/or references to charlie inventing the
compare&swap instruction while working on cp67 kernel smp fine-grain
locking
https://www.garlic.com/~lynn/subtopic.html#smp
old email referencing the dec announcement of symmetrical
multiprocessing (and some comments about it not being considered "real"
commercial until it supported symmetrical ... vax 8800)
https://www.garlic.com/~lynn/2007.html#email880324
https://www.garlic.com/~lynn/2007.html#email880329
in this post
https://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?
i've frequently claimed that john's 801/risc design (trade-offs) was
based on both the heavy multiprocessor cache consistency overhead
(that didn't scale well as the number of processors increased) ... and
doing the exact (KISS) opposite of what had been attempted in
the (failed) future system effort
https://www.garlic.com/~lynn/submain.html#futuresys
which had attempted to combine lots of advanced features, borrowing from tss/360, multics, and some very complex hardware interfaces. some sense of that showed up in the subsequent system/38 effort ... while 801/risc tried to do the exact opposite.
it wasn't until later generations, with things like directory-based cache consistency and numa, that scale-up in the number of processors started to appear.
the work on (hardware) cache consistency implementations was also useful
in working out details of distributed lock manager for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
as well as a process that would allow (database) cache-to-cache copying (w/o first having to write to disk) while still being able to preserve acid properties
some recent posts mentioning DLM work:
https://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008c.html#81 Random thoughts
https://www.garlic.com/~lynn/2008d.html#25 Remembering The Search For Jim Gray, A Year Later
https://www.garlic.com/~lynn/2008d.html#70 Time to rewrite DBMS, says Ingres founder
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
some recent numa/sci posts:
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#3 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#8 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#12 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#21 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#84 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#5 Microsoft versus Digital Equipment Corporation
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: American Airlines Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 24 May 2008 11:49:22 -0400
lists@AKPHS.COM (Phil Smith III) writes:
one of the things that cut her stint with amadeus short was that she sided with the decision to use x.25 rather than sna as the main communication protocol ... which brought out a lot of opposition from certain quarters. it didn't do them much good since amadeus went with x.25 anyway.
current amadeus website
http://www.amadeus.com/
for other archeological notes ... the eastern airlines res system had been
running on a 370/195. one of the things that helped put the final nails
in the future system project coffin
https://www.garlic.com/~lynn/submain.html#futuresys
was analysis showing that if a future system machine was implemented out of the same performance technology as used in the 370/195 ... and the eastern airlines res system moved over to it ... it would have the thruput of a 370/145.
wiki computer res system page
https://en.wikipedia.org/wiki/Computer_reservations_system
from above:
European airlines also began to invest in the field in the 1980s,
propelled by growth in demand for travel as well as technological
advances which allowed GDSes to offer ever-increasing services and
searching power. In 1987, a consortium led by Air France and West
Germany's Lufthansa developed Amadeus, modeled on SystemOne. In 1990,
Delta, Northwest Airlines, and Trans World Airlines formed Worldspan,
and in 1993, another consortium (including British Airways, KLM, and
United Airlines, among others) formed the competing company Galileo
International based on Apollo. Numerous smaller companies have also
formed, aimed at niche markets the four largest networks do not cater
to.
... snip ...
for totally unrelated topic drift ... at one point we were asked to
consult with one of the main reservation systems about redoing
various parts of the implementation. recent posts mentioning
doing a paradigm change in the implementation of routes:
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sat, 24 May 2008 11:52:22 -0400
peter@taronga.com (Peter da Silva) writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 25, 2008 Subject: Worst Security Threats? Blog: Information Security

we had been asked to come in and help word smith the cal. electronic signature legislation (and later the fed. legislation).
Some of the parties involved were also working on privacy issues and had done in-depth consumer surveys ... and found the two most important issues were
• identity theft (mostly account fraud, fraudulent transactions against existing accounts)
• denial of service (institutions using personal information to the detriment of the individual)
studies have regularly found that upwards of 70 percent of identity theft involves "insiders".
part of the lack of attention to the identity theft problem was at the basis of the subsequent cal. state breach notification legislation (which has since also shown up in many other states).
recent article
Most Retailer Breaches Are Not Disclosed, Gartner Says
http://www.pcworld.com/businesscenter/article/146278/most_retailer_breaches_are_not_disclosed_gartner_says.html
Most retailer breaches are not disclosed, Gartner says
http://www.networkworld.com/news/2008/060508-researchers-say-notification-laws-not.html
in the mid-90s, the x9a10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for all retail payments. part of that effort was
detailed study of threats & vulnerabilities related to fraudulent
transactions. the product of the x9a10 financial standard working
group was the x9.59 financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959
part of the detailed threat and vulnerability study was identifying lots of infrastructure & paradigm issues ... including transaction information having diametrically opposing requirements. For security reasons, existing transaction & account information has to be kept completely confidential and never divulged. However, there are a large number of business processes that require access to the transaction and account information in order to perform transaction processing. This has led to our periodic comment that even if the planet were buried under miles of information-hiding encryption ... it still wouldn't be possible to prevent breaches.
As a result, x9.59 standard slightly modified the transaction processing paradigm ... making previous transaction information useless to attackers for performing fraudulent transactions. x9.59 did nothing regarding trying to hide such information ... but x9.59 standard eliminated such breaches as threat & vulnerability.
another aspect of the detailed vulnerability and threat analysis (besides diametrically opposing requirements on transaction information, which can never be divulged and at the same time is required for numerous business processes) ... was security proportional to risk. A huge part of existing attacks (both insiders and outsiders) is directed at these breaches since the results represent significant financial gain to the attackers (from the fraudulent transactions). We've estimated that the value of the information to the attackers (steal all the money in the account or run up transactions to the credit limit) is hundreds of times greater than the value of the information to the retailers (profit margin on the transaction). As a result, the attackers (insiders and outsiders) can afford to outspend the defenders possibly 100:1. In effect, the x9.59 financial standard also corrected this imbalance by removing the value of the information to the attackers. This also eliminates much of the motivation behind the phishing attacks (i.e. it doesn't eliminate the attacks, just eliminates the usefulness of the information for fraudulent transaction purposes).
part of security proportional to risk came from having been asked to consult with a small client/server startup that wanted to do payment transactions on their server and had this technology they had invented and wanted to use called SSL. Most people now refer to the result as electronic commerce.
One of the things that we kept running into was that none of the server operators could afford what we were specifying as the necessary minimum security (proportional to the financial risk). This was later confirmed by the x9a10 financial standard working group's detailed threat and vulnerability studies ... and helped motivate the paradigm tweak in the x9.59 financial standard (which removed most phishing and breaches as a vulnerability; it didn't eliminate phishing and breaches, just removed most of the basic financial motivation behind the phishing and breach efforts).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Mon, 26 May 2008 09:48:17 -0400
jmfbah <jmfbahciv@aol> writes:
a physical mapped cache uses the physical address to identify things in the cache ... and bits from the physical address to index location in the cache.
a virtual mapped cache uses the virtual address to identify things in the cache ... and bits from the virtual address to index location in the cache. this can start cache lookup w/o waiting to perform virtual to physical address translation from the table/translation lookaside buffer.
the virtual mapped cache sharing problem is aliasing ... where the same shared (physical) location can be known by multiple different virtual addresses. this opens the possibility that the same physical data indexes to different locations in a virtual cache (because of different virtual addresses) and is known/named by different (virtual address) name/alias
the monitor will know a physical address is shared ... when it sets up the (virtual to real) translation tables ... but that doesn't mean that a virtual cache can easily figure out that a physical address is shared and known by multiple different aliases (major point of having a virtual cache is doing a quicker lookup w/o having to wait for the virtual to real translation delay from the tlb).
the issue is somewhat analogous to multiprocessor cache consistency protocols ... i.e. how to maintain consistency where the same physical data may be in different caches ... but in this case, it is the same physical data in the same cache ... but at different locations because of being known by multiple different names/aliases.
the assumption here is that the cache is large enuf that it attempts to maintain locations for multiple different virtual address spaces (and doesn't flush the cache whenever there is virtual address space or context switch). this is analogous to table/translation look aside buffer keeping virtual to physical address mappings for multiple different virtual address spaces (as opposed to flushing all mappings whenever there is context or virtual address space change). this is the problem where different people have the same name and it is necessary to differentiate which person you are talking about (as opposed to the situation where the same person has multiple different aliases).
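a minimal sketch of the alias problem (illustration only, with made-up cache geometry ... 64-byte lines, 512 sets, 4k pages): when the set index is taken from virtual address bits above the page offset, two different virtual mappings of the same physical page can land in different cache sets:

/* illustration of the virtual-cache alias problem described above
 * (made-up geometry: 64-byte lines, 512 sets, 4k pages). index bits
 * 6..14 extend above the 12-bit page offset, so two virtual aliases
 * of one physical line can index different sets. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BITS   6                      /* 64-byte cache lines */
#define SET_BITS    9                      /* 512 sets            */

static unsigned set_index(uint64_t addr)
{
    return (addr >> LINE_BITS) & ((1u << SET_BITS) - 1);
}

int main(void)
{
    uint64_t phys = 0x12345000;            /* one shared physical page  */
    uint64_t va1  = 0x00007000;            /* mapped here in one space  */
    uint64_t va2  = 0x00019000;            /* and here in another space */

    /* physical-indexed cache: both mappings hit the same set */
    printf("phys-indexed set: %u\n", set_index(phys));

    /* virtual-indexed cache: the bits above the page offset differ,
     * so the same physical line lands in two different sets (aliases) */
    printf("virt-indexed set via va1: %u\n", set_index(va1));
    printf("virt-indexed set via va2: %u\n", set_index(va2));
    return 0;
}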
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Mon, 26 May 2008 09:52:32 -0400
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
3.11.4 Translation-Lookaside Buffer
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/3.11.4?SHELF=DZ9ZBK03&DT=20040504121320
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 26, 2008 Subject: Credit Card Fraud Blog: Information Security

In the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. part of the effort was a detailed end-to-end threat & vulnerability study.
one of the major threats & vulnerabilities identified was being able to use information from previous transactions to enable fraudulent transactions (i.e. skimming at pos, eavesdropping on the internet, security breaches and data breaches of log files, and lots of other kinds of compromises). we have sort of made reference to the general phenomenon as the "naked transaction" (wherever it exists, it is vulnerable).
the x9a10 financial standard working group produced the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
... which slightly tweaked the paradigm, eliminating the "naked
transaction" phenomenon ...
https://www.garlic.com/~lynn/subintegrity.html#payments
aka it didn't do anything about attempting to hide the information (from previous transactions) ... it just eliminated attackers being able to use the information for fraudulent transactions.
somewhat related answer
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/237628-24760462
also
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?
part of the x9.59 financial standard protocol had been based on the earlier work we had done on what is now usually referred to as electronic commerce. we were asked to consult with a small client/server startup that wanted to do financial transactions on their servers and had this technology called SSL that they had invented and wanted to use. The major use of SSL in the world today is involved with this thing called electronic commerce and hiding information related to the transactions.
Part of the x9.59 standard was eliminating the need to hide financial transaction information as countermeasure to fraudulent transactions ... which then can be viewed as also eliminating the major use of SSL in the world today.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Tue, 27 May 2008 08:58:34 -0400
Johnny Billquist <bqt@update.uu.se> writes:
so the virtual cache problem can be akin to multiprocessor cache coherency ... the same physical location can appear in multiple different places.
typically a group of cache lines is "indexed" by a set of bits from the location address (and then that set is checked to see if the required information is already loaded ... and if not ... one of the cache lines is selected for replacement).
in a virtual cache ... some of the index bits may come from the "page" displacement ... i.e. the part of the address that is the page displacement part of the virtual address. that set of bits would be the same for a physical address that might be known/loaded by different virtual addresses. other cache index bits may come from the part of the location address that is above the page displacement ... and may be different for a physical location that is shared in different virtual address spaces at different locations (the alias problem).
so one of the approaches to virtual cache coherency: if the desired location isn't in the cache ... it is a miss and a real storage fetch has to be started. however, overlapped with that fetch (which might take thousands of cycles), the hardware could check the other possible alias locations. it has to interrogate the TLB to get the real address (for the missing cache line) in order to do the real memory fetch. it then could also look at all the possible alias locations for the same physical data ... while it is waiting for the real storage fetch (alternatively it could simply invalidate all virtual cache lines that might be an alias).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Tue, 27 May 2008 09:13:55 -0400jmfbah <jmfbahciv@aol> writes:
if it is a hardware managed cache ... on a cache miss (can't find it), the hardware interrogates the TLB for the real address ... and sends out a request to real memory to load the cache line for that real address. similarly, when a (store into) cache line is being replaced and the (changed) information has to be flushed to real storage ... it can interrogate the TLB for the real address.
however, some virtual caches may keep the virtual cache line "tagged" with both the virtual address as well as the real address (when it is loaded) ... even tho the cache line isn't "indexed" by the real address; i.e. a virtual cache that is simultaneously keeping track of cache lines from multiple different virtual address spaces ... already has to track the virtual address space identifier and virtual address for each cache line ... in addition it might also remember the physical address (even tho the real address isn't used to index the cache line). when a (modified) cache line has to be written back to storage ... this allows the operation to start immediately w/o the hardware having to do a separate interrogation of the TLB to get the corresponding real address.
so in this discussion about analogy between multiprocessor cache
coherency and virtual caches that support aliases (same real
address known by multiple different virtual addresses)
https://www.garlic.com/~lynn/2008i.html#25 Microsoft versus Digital Equipment Corporation
if the cache is keeping the real address as part of a cache line tag, then it can look at all the possible alternative alias locations in which the same real address might appear and only invalidate/remove a line if it has a matching real address.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Tue, 27 May 2008 09:58:01 -0400jmfbah <jmfbahciv@aol> writes:
both caches and TLBs are frequently partially associative. TLBs are sometimes a hardware-managed cache of translation table information (virtual address to real address mapping) located in real storage (although some TLBs are software managed ... and require the monitor to load the entries).
typical cache entries (and hardware-managed TLB entries) are broken into sets of entries. say a 2mbyte cache with 128 byte cache lines ... has 16,384 cache lines. If the cache is 4-way associative, the cache is broken up into 4096 sets of four cache lines each. The cache then needs to use 12 bits from the original address (real or virtual) to index one of the 4096 sets of four cache lines ... and then check all four of those cache lines (i.e. associative) to see whether any matches the desired address.
(hardware) TLBs tend to work similarly (caching virtual to real address information), they have sets of entries that may be 2-way or 4-way. Bits from the address are used to index a specific set of entries and then all entries in that set are checked for a match.
This is a trade-off between the circuits/delay required to do a fully associative check and the interference that can happen when a whole set of different addresses map to the same, single entry and start "thrashing".
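for illustration, the arithmetic of that 2mbyte/128-byte/4-way example as a small C sketch (the sizes are just the ones from the example above, not any particular machine):

#include <stdio.h>
#include <stdint.h>

#define CACHE_SIZE (2u * 1024 * 1024)
#define LINE_SIZE  128u
#define WAYS       4u

#define NUM_LINES  (CACHE_SIZE / LINE_SIZE)   /* 16,384 cache lines */
#define NUM_SETS   (NUM_LINES / WAYS)         /* 4,096 sets of four */

/* 12 bits of the address (above the 7 line-offset bits) pick the set;
   the four lines in that set are then compared associatively */
static unsigned set_index(uint64_t addr)
{
    return (unsigned)((addr / LINE_SIZE) % NUM_SETS);
}

int main(void)
{
    printf("lines=%u sets=%u index bits=%u\n", NUM_LINES, NUM_SETS, 12u);
    printf("addr 0x12345680 -> set %u\n", set_index(0x12345680ull));
    return 0;
}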
there was an issue with the 370/168 TLB in the bits it used to index the TLB. These were 16mbyte/24bit virtual address machines. There were 128 TLB entries ... and one of the bits used to index TLB entries was the "8 mbyte bit" (i.e. with 24 bits numbered 0-23, the first or zero bit). The favorite son operating system was designed so that the kernel occupied 8mbytes of each virtual address space and (supposedly) the application had the other 8mbytes. The result was that typically half of the TLB entries were filled with kernel virtual addresses and half with application virtual addresses. However, for vm370/cms, the cms virtual address space started at zero ... and extended upwards ... and most applications rarely crossed the 8mbyte line ... so frequently half the 370/168 TLB entries would go unused.
370 TLBs were indexed with low associativity ... however, the 360/67 "look-aside" (hardware virtual to real mapping) wasn't referred to as a TLB ... it was called the associative array ... since it was fully associative (all entries interrogated in parallel).
there was sort of a virtual/real cache issue with the introduction of the 370/168-3, which doubled the 32k cache (of the 370/168-1) to a 64k cache. With the 32k cache, the number of cache-line sets was such that the index bits could be taken purely from the page displacement of the address ... which would be the same whether it was a virtual address or a real address.
370 had support for both 2k virtual page size mode and 4k virtual page size mode. With the 32k cache ... there was no difference between 2k & 4k page sizes ... however, for the 64k cache, they took the "2k" bit as part of cache line indexing. As a result, a 168-3 operating in 4k virtual page mode would use the full 64k cache ... but when operating in 2k virtual page mode would only use 32k of cache. And on any transition between 2k and 4k modes ... the cache would be flushed ... since the mappings were different. Some customers running vm370 with a VS1 virtual guest (a batch operating system that ran with 2k virtual page size) upgraded from a 168-1 to a 168-3 and performance got much worse.
Nominally, VS1 would run on a 168-3 with 32k of cache ... just as if it were a 168-1 ... and shouldn't have seen any performance improvement (but also shouldn't have seen a decrease). The problem was that vm370 defaulted the hardware settings to 4k page mode ... except when 2k page mode was specifically requested. The result was that vm370 (when running virtual VS1 or DOS/VS) was frequently making the hardware switch back and forth between 2k and 4k page modes. On all other machines ... this wasn't a problem ... but on the 168-3 ... this resulted in the cache having to be flushed (every time the switch occurred).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Scalable Nonblocking Data Structures Newsgroups: alt.folklore.computers Date: Tue, 27 May 2008 19:36:49 -0400Scalable Nonblocking Data Structures
Cliff Click on a Scalable Non-Blocking Coding Style
http://www.infoq.com/news/2008/05/click_non_blocking
from above:
The major components of Click's work are:
...
2. Atomic-update on those array words (using
java.util.concurrent.Atomic.*). The Atomic update will use either
Compare and Sweep (CAS) if the processor is Azul/Sparc/x86, or Load
Linked/Store-conditional (LL/SC) on the IBM platform.
... snip ...
or maybe compare&swap ... invented by charlie (i.e. CAS are charlie's
initials) when he was doing work on multiprocessing fine-grain locking
for cp67 virtual machine system at the science center. misc. past posts
mentioning smp and/or compare&swap
https://www.garlic.com/~lynn/subtopic.html#smp
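purely as illustration ... a minimal C11 analogue of that kind of atomic array-word update (the actual work described above is java, using java.util.concurrent.atomic; the names here are made up):

#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* shared array of words updated lock-free; compare&swap (or LL/SC on
   machines that have it instead) retries until the update wins */
#define TABLE_SIZE 1024
static _Atomic uint64_t table[TABLE_SIZE];

/* atomically replace the slot contents only if it still holds 'expected';
   returns true if this thread's update won the race */
bool claim_slot(size_t i, uint64_t expected, uint64_t desired)
{
    return atomic_compare_exchange_strong(&table[i], &expected, desired);
}

/* unconditional read-modify-write built from a compare&swap retry loop */
void add_to_slot(size_t i, uint64_t delta)
{
    uint64_t old = atomic_load(&table[i]);
    while (!atomic_compare_exchange_weak(&table[i], &old, old + delta))
        ;   /* 'old' was reloaded with the current value; retry */
}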
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 27, 2008 Subject: What is your definition of "Information"? Blog: Information Storageold definition we used from 15-20 yrs ago:
we had looked at copyrighting the term business science in the early
90s, somewhat in conjunction with this graph ... old post from 1995
archived here ...
https://www.garlic.com/~lynn/95.html#8aa
subsequently there have been more simplified versions of the above diagram
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: subprime write-down sweepstakes Newsgroups: alt.folklore.computers Date: May 29, 11:59 amlynn wrote:
Did Wall Street Wreck The Economy?, Congress, regulators start to
connect the dots
http://www.consumeraffairs.com/news04/2008/05/wall_street.html
from above:
If so, that thread may lead to Wall Street. Increasingly, everyone
from lawmakers to industry insiders has been connecting the dots to
reveal how some investors' actions have had huge repercussions on the
economy.
... snip ...
as mentioned previously .... toxic CDOs were used two decades
ago in the S&L crisis to obfuscate the underlying value ... and in
this decade-old, long-winded post .... there is discussion about need
for visibility into CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: May 30, 2008 Subject: Mastering the Dynamics of Innovation Blog: Change ManagementMany times, existing processes represent some organization, technology, and/or business trade-offs that occurred at some point in time. such trade-offs become institutionalized ... and there is frequently a failure to recognize, when the environment (that the original trade-off assumptions were based on) has changed, that the resulting trade-off decisions are no longer valid.
for a truly off-the-wall way of viewing this, consider myers-briggs personality traits ... where the majority of the population tends to be personality types that operate on previous experience ... and only an extremely small percentage of the population routinely operates based on analytical analysis. it is much more difficult for experiential personality types to operate out of the box and routinely view and operate purely analytically. It is much easier for the analytically oriented to recognize that the basis for the original trade-off decisions has totally changed.
this also can show up as generational issues where the young experiential personality types (that still need to be molded by experience) tend to be much more open to different ways of doing things (but that tends to gradually change as they gain experience). Analytically oriented personalities tend to live their whole life questioning rules and authorities (not just in youth).
I've periodically commented that from an evolutionary aspect, in a static, stable environment ... constantly having to analyze and figure out the reason why things are done represents duplication of effort (effectively a waste of energy). However, in a changing environment, it can represent a significantly more efficient means of adapting to change (compared to an experimental trial-and-error approach). One possible study might be whether there are shifts in the ratio of different personality types based on whether the environment is static or rapidly changing.
Circa 1990, one of the large US auto manufacturing companies had a C4 effort that was to look at radically changing how they did business, and they invited some number of technology vendors to participate. One of their observations was that US industry was (still) on a 7-8 yr new product cycle while foreign competition had radically reduced the elapsed time to turn out new products. Being faster also makes it easier to address all sorts of other issues (including quality). Introducing change is easier if it is done in a new cycle ... and if the new cycles are happening faster and much more frequently ... it promotes agility/change.
... aka being able to operate Boyd's OODA-loop faster than the
competition. lots of past posts mentioning Boyd and/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: alt.folklore.computers Date: Sun, 1 Jun 2008 14:12:40 -0700 (PDT) Subject: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish FirstA Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
from above:
During the 1970s and '80s at I.B.M. and Tandem Computer, he helped
lead the creation of modern database and transaction processing
technologies that today underlie all electronic commerce and more
generally, the organization of digital information. Yet, for all of
his impact on the world, Jim was both remarkably low-key and
approachable. He was always willing to take time to explain technical
concepts and offer independent perspective on various issues in the
computer industry
... snip ...
Tribute to Honor Jim Gray
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 1, 2008 Subject: Mainframe Project management Blog: Computers and SoftwareMainframe efforts have tended to be much more business critical and therefore have tended to spend a lot more time making sure that there are no problems and/or, if problems might possibly show up, that functions are provided that anticipate and can handle the problems.
As part of the support for the internet payment gateway for what is now
referred to as electronic commerce (and the original SOA) ... the initial
implementation involved high quality code development and testing.
https://www.garlic.com/~lynn/subnetwork.html#gateway
However, we have often commented that to take a traditional application and turn it into a business critical service can require 4-10 times the base development effort. As part of the subsequent payment gateway effort we developed a failure matrix ... all possible ways that we could think of that a failure might occur involving the payment gateway ... and all the possible states that a failure could occur in. It was then required that the payment gateway show that it could automatically handle/recover from all possible failure modes in all possible states ... and/or demonstrate that the problem could be isolated and identified within a very few minutes.
A much earlier example ... as part of turning out the mainframe
resource management product,
https://www.garlic.com/~lynn/subtopic.html#fairshare
the final phase involved a set of over 2000 validation and calibration
benchmarks that took over 3 months elapsed time to run. This included
a sophisticated analytical system performance model which would
predict how the system was expected to operate under various
conditions (workload and configuration) ... automatically configure
for that benchmark, automatically run the benchmark and then validate
whether the results matched the predicted.
https://www.garlic.com/~lynn/submain.html#benchmark
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: Sun, 1 Jun 2008 20:29:51 -0700 (PDT) Subject: Re: American Airlineshancock4 writes:
there was the airline control program (ACP), the (vendor) operating
system used for many of these online systems .... wiki page
https://en.wikipedia.org/wiki/Airlines_Control_Program
there was a long period of evolution of the ACP operating system as well as the customer applications built on the operating system. In some sense SABRE is a brand covering a whole bunch of online applications that were (initially) built on ACP. Some number of the other airline res "systems" were also whole sets of applications built using the ACP operating system. Currently, some number of the applications have been migrated to other platforms.
circa 1980 or so ... there were some number of financial institutions
using ACP for financial transactions ... which led to renaming ACP to
TPF (transaction processing facility) ... wiki page
https://en.wikipedia.org/wiki/Z/TPF
from above:
Current users include Sabre (reservations), Amadeus (reservations),
VISA Inc (authorizations), Holiday Inn (central reservations), CBOE
(order routing), Singapore Airlines, KLM, Qantas, Amtrak, Marriott
International , worldspan and the NYPD (911 system).
... snip ...
For some "transaction" drift ... yesterday, a tribute was held for Jim
Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
Bruce Lindsay gave a great talk about Jim formalizing transactions and database management ... to provide sufficient integrity and reliability that they could be trusted in lieu of paper entries ... which was required to make things like online transaction processing possible (i.e. it was necessary to demonstrate high enough integrity and reliability that it would be trusted in place of paper and human/manual operations).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: Sun, 1 Jun 2008 21:00:29 -0700 (PDT) Subject: Re: American AirlinesOn May 31, 7:42 am, wrote:
the reference to doing 10 impossible things
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
mentions having to do a major paradigm change in how things were implemented. Part of the 10 impossible things stemmed from heavy manual involvement in how the information was preprocessed for use by the system. Part of the major paradigm change involved effectively totally eliminating all that manual preprocessing ... making it all automated.
Some number of the 10 impossible things were also related to performance/thruput limitations. So part of the paradigm change was to make some things run 100 times faster. This allowed 3-4 separate queries to be collapsed into a single operation, improving human factors (since it was now possible to do a lot more, a lot of back&forth interaction with an agent could all be automated).
A combination of the human involvement in data preprocessing and the performance limitations resulted in a limitation on the number of flight segments that could be considered. The change in paradigm resulted in all flight segments in the world being easily handled.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: alt.folklore.computers Date: Sun, 1 Jun 2008 21:15:55 -0700 (PDT) Subject: Re: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish Firstre:
lynn wrote:
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
from above:
Gray is known for his groundbreaking work as a programmer, database
expert and Microsoft engineer. Gray's work helped make possible such
technologies as the cash machine, ecommerce, online ticketing, and
deep databases like Google. In 1998, he received the ACM A.M. Turing
Award, the most prestigious honor in computer science. He was
appointed an IEEE Fellow in 1982, and also received IEEE Charles
Babbage Award.
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: Mon, 2 Jun 2008 07:54:43 -0700 (PDT) Subject: Re: American AirlinesWarren Brown wrote:
I was recently reviewing some old email exchanges with Jim Gray from the late 70s and there was one discussing the 3830 (disk controller) ACP (lock) RPQ ... which basically provided a logical locking function in the controller ... for coordinating multiple loosely-coupled (i.e. mainframe for cluster) processors.
the old research bldg. 28, ... where the original relational/sql work
was done
https://www.garlic.com/~lynn/submain.html#systemr
was just across the street from bldg 14 (disk engineering lab) and
bldg. 15 (disk product test lab) ... and they let me play disk
engineer over there
https://www.garlic.com/~lynn/subtopic.html#disk
During Jim's tribute, people were asked to come up and tell stories. The story I told was that Jim and I used to have friday evening sessions at some of the local establishments in the area (when eric's deli opened across from the plant site, they let us use the back room and gave us pitchers of anchor steam at half price). One Friday evening we were discussing what kind of "silver bullet" application we could deploy that would entice more of the corporation (especially executives) to actually use computers (primarily online vm370) and we came up with the online telephone book. However one of the requirements was that Jim would implement his half in 8hrs and I would implement my half in 8hrs.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: 2 Jun 2008 10:15:34 -0700 Subject: Re: American AirlinesEric Chevalier wrote:
w/o the ACP RPQ, loosely-coupled operation required reserve/release commands ... which reserved the whole device for the duration of the i/o operation. Actually, reserve could be issued and possibly multiple operations performed before issuing the release (traditional loosely-coupled operation ... locking out all other processors/channels in the complex).
since these were logical name locks, there was significant latitude in choosing lock names ... they could be very low level like a record name ... i.e. cchhr .... or something higher level like a PNR.
note that while ACP/TPF did a lot of work on loosely-coupled, it took them quite awhile to get around to doing tightly-coupled multiprocessor support. The result was quite a bit of consternation in the 3081 timeframe ... which originally wasn't going to have a single processor offering. One of the side-effects was that there were a whole bunch of changes that went into vm370 for enhancing TPF thruput in a 3081 environment ... changes that tended to degrade thruput for all the non-TPF customers. Eventually, there was enough pressure that a 3083 (single processor) was offered ... primarily for ACP/TPF customers.
There was another technique for loosely-coupled operation ...
originally developed for HONE (avoiding the performance impact of
reserve/release but w/o the airlines controller RPQ). HONE was the
world-wide, online (vm370-based) sales & marketing support system.
https://www.garlic.com/~lynn/subtopic.html#hone
The technique was basically a special CCW sequence that leveraged CKD search commands to simulate the semantics of the mainframe compare&swap instruction (but for DASD i/o operation). The US HONE datacenter provided possibly the largest single system image at the time (a combination of multiple loosely-coupled, tightly-coupled processor complexes) with load-balancing and fall-over across the complex. Later this was extended to geographic distance with a replicated center in Dallas and then a 3rd in Boulder.
There were then talks with the JES2 multi-access spool people about them using the same CCW technique in their loosely-coupled operation.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: Mon, 2 Jun 2008 17:04:42 -0700 (PDT) Subject: Re: American AirlinesJohn P. Baker wrote:
note by comparison, reserve will be a CCW that "locks" the whole device ... which typically will be followed by some sort of seek/search/read. That channel program ends, the processor then operates on/updates the data read and writes it back ... finally releasing the device.
The other approach mentioned ... developed at HONE, was simulation of the multiprocessor compare&swap instruction ... using "search key & data equal" ... the data is read (w/o lock or reserve), a copy is made and the update is applied. then a channel program is issued with search key & data equal ... using the original read image ... chained to a write of the updated data. If some other processor has changed the record in the meantime, the search fails, the chained write doesn't happen, and the whole sequence is retried ... the same semantics as compare&swap.
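channel program details aside, the logic is the classic optimistic update loop ... a toy C sketch (the in-memory "record" and the function names are stand-ins for the actual CCW sequence):

#include <string.h>
#include <stdbool.h>
#include <stdio.h>

#define RECLEN 32

/* toy in-memory stand-in for the DASD record (the real thing drives CCWs) */
static unsigned char disk_record[RECLEN];

static void read_record(unsigned char rec[RECLEN])
{
    memcpy(rec, disk_record, RECLEN);          /* plain read, no reserve */
}

/* stands in for "search key & data equal" chained to the write: the
   write only happens if the record still matches the original image */
static bool write_if_unchanged(const unsigned char expected[RECLEN],
                               const unsigned char updated[RECLEN])
{
    if (memcmp(disk_record, expected, RECLEN) != 0)
        return false;                          /* "search" failed */
    memcpy(disk_record, updated, RECLEN);
    return true;
}

/* HONE-style update: read w/o reserve, update a copy, conditionally
   write back; if another processor changed the record, retry ...
   compare&swap semantics, but against DASD */
static void update_record(void (*apply)(unsigned char rec[RECLEN]))
{
    unsigned char original[RECLEN], updated[RECLEN];
    do {
        read_record(original);
        memcpy(updated, original, RECLEN);
        apply(updated);
    } while (!write_if_unchanged(original, updated));
}

static void bump_count(unsigned char rec[RECLEN]) { rec[0] += 1; }

int main(void)
{
    update_record(bump_count);
    printf("count now %u\n", disk_record[0]);
    return 0;
}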
the following from long ago and far away ...
Date: March 25, 1980
Subject: DASD sharing in ACP using the RPQ
On Monday I bumped into xxxx & yyyy. They were both interested in
shared DASD for availability and load sharing. I mentioned the ACP
RPQ which puts a lock manager in the microcode for the disk
controller. They were real interested in that and so I began
telephoning.
My first contact was xxxxx of GPD San Jose who wrote the microcode
(nnn-nnnn). He explained that the RPQ "has a low profile" because it
is not part of the IBM strategy and is inconsistent with things like
string switching. The basic idea is that Lock commands have been
added to the controller's repertoire of commands. One issues LOCK
READ CCW pair and later issues WRITE UNLOCK CCW pair. If the lock
fails the read fails and the CPU will poll for the lock later. xxx
has documented all this in the technical report TR 02.859 "Limitied
Lock Facility in a DASD Control Unit" xxxxx, xxxxx, xxxxx (Oct. 1979).
xxx pointed me to xxx xxxxx at the IBM Tulsa branch office (nnn-nnnn).
xxxx wrote the channel programs in ACP which use the RPQ. He said
they lock at the record level, and that it works nicely. We also
discussed restart. He said that the code to reconfigure after CPU or
controller failure was not hard. For duplexed files they lock the
primary if available, if not they lock the secondary. ACP allows only
one lock at a time and most writes are not undone or redone at restart
(most records are not "critical"). xxx said that their biggest
problem was with on-line utilities. One which moves a volume from
pack to pack added 50% to their total effort! xxx in turn pointed me
to two of the architects.
xxxxxx at White Plains DPD (nnn-nnnn) knows all about ACP and promised
to send me the documentation on the changes to ACP. He said the
changes are now being integrated into the standard ACP system. He
observed that there is little degradation with the RPQ and prefers it
to the MP approach. He mentioned that there are about 65 ACP
customers and over 100 ACP systems. xxxxx is also at White Plains
(nnn-nnnn). He told me lots of numbers (I love numbers).
He described a 120 transaction/second system.
The database is spread over about 100 spindles.
Each transaction does 10 I/O.
10% of such I/O involve a lock or unlock command.
The average hold time of a lock is 100 ms.
1.7 lock requests wait per second.
That implies that 14% of transactions wait for a lock.
This is similar to the System R number that 10% of transactions wait.
ACP has deadlock avoidance (only hold one lock at a time).
There are 60 lock requests per second (and 60 unlocks) and so there
are about 6 locks set at any instant.
This is not a heavy load on the lock managers (a controller is likely
to have no locks set.)
... snip ... top of post, old email index
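as a rough consistency check on those figures (my arithmetic, not from the email) ... 60 lock requests/second, each held an average of 100 ms, works out via little's law to the same "about 6 locks set at any instant":

#include <stdio.h>

int main(void)
{
    double tx_per_sec = 120.0;   /* transactions/second            */
    double io_per_tx  = 10.0;    /* I/Os per transaction           */
    double lock_frac  = 0.10;    /* of I/Os that are lock or unlock */
    double hold_sec   = 0.100;   /* average lock hold time          */

    double lock_ops = tx_per_sec * io_per_tx * lock_frac;  /* 120 lock+unlock/sec */
    double locks    = lock_ops / 2.0;                      /* 60 lock requests/sec */
    double held     = locks * hold_sec;                    /* little's law: ~6 held */

    printf("lock+unlock ops/sec = %.0f\n", lock_ops);
    printf("lock requests/sec   = %.0f\n", locks);
    printf("locks held at any instant ~ %.0f\n", held);
    return 0;
}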
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: alt.folklore.computers Date: Tue, 3 Jun 2008 08:05:21 -0700 (PDT) Subject: Re: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish Firstre:
and a little related drift in this thread:
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
another article
Tech luminaries honor database god Jim Gray
http://www.theregister.co.uk/2008/06/03/jim_gray_tribute/
from above:
"A lot of the core concepts that we take for granted in the database
industry - and even more broadly in the computer industry - are
concepts that Jim helped to create," Vaskevitch says, "But I really
don't think that's his main contribution."
... snip ...
and some old email references when Jim was leaving for Tandem and
trying to hand off some number of responsibilities to me:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016
thread from last year on Jim having gone missing:
https://www.garlic.com/~lynn/2007d.html#4 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#6 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#8 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#33 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007g.html#28 Jim Gray Is Missing
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: bit.listserv.ibm-main, alt.folklore.computers Date: Tue, 3 Jun 2008 09:23:19 -0700 (PDT) Subject: Re: American AirlinesShmuel Metz , Seymour J. wrote:
old post with some product "code" names
https://www.garlic.com/~lynn/2007e.html#38 FBA rant
there was some early -13 (& -23) literature showing a 90 percent cache hit rate. i pointed out that the example was actually a 3880 with 10 records per track, reading sequentially. the first record read for a track would be a miss and bring in the whole track, and then the subsequent 9 reads would all be hits. I raised the issue that if the application were to do full-track buffer reads ... the same sequential read would drop to a zero percent hit rate.
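the arithmetic of that example, as a trivial sketch (record-at-a-time vs. full-track buffer reads, assuming the controller stages a full track on the first miss):

#include <stdio.h>

int main(void)
{
    int records_per_track = 10;

    /* record-at-a-time reads: 1 miss to stage the track, then 9 hits */
    double hit_rate_record_reads =
        (double)(records_per_track - 1) / records_per_track;

    /* full-track buffer reads: one read per track, always a miss */
    double hit_rate_fulltrack_reads = 0.0;

    printf("record reads:     %.0f%% hit rate\n", 100 * hit_rate_record_reads);
    printf("full-track reads: %.0f%% hit rate\n", 100 * hit_rate_fulltrack_reads);
    return 0;
}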
past posts in this thread:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#35 American Airlines
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#38 American Airlines
https://www.garlic.com/~lynn/2008i.html#39 American Airlines
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 3, 2008 Subject: Security Breaches Blog: Information SecurityWe had been called in to help word smith the cal. state electronic signature (and later federal) legislation. past refs
Some of the involved organizations were also involved in privacy issues and had done in-depth consumer surveys and found that the two major issues were
1) identity theft ... fraudulent account transactions affecting most people, and stats have been that upwards of 70 percent of the incidents involved insiders
2) denial of service ... institutions using personal information to the detriment of the individual
because so little attention was being paid to the root causes behind these activities, it became major motivation for the cal. state breach notification legislation (and subsequent similar legislation in other states) ... hoping that the mandated breach notification and associated publicity would start to result in something being done about the problems.
Earlier we had been asked to consult with a small client/server
startup that wanted to do payment transactions on their server and had
this technology called SSL they had invented and wanted to use (now
frequently referred to as electronic commerce). Some number of past
posts referring to the activity
https://www.garlic.com/~lynn/subnetwork.html#gateway
We then got roped into working on the x9.59 financial transaction standard in
the x9a10 financial standard working group. In the mid-90s, X9A10 had
been given the requirement to preserve the integrity of the
financial infrastructure for all retail payments ... misc. past
references
https://www.garlic.com/~lynn/x959.html#x959
part of the activity involved in-depth, end-to-end threat and vulnerability studies. this included focusing on the types of problems that have represented the majority of the breaches reported in the news over the past several years.
There were (at least) two characteristics
1) in the current paradigm, account information, including previous transaction information, represents diametrically opposing security requirements. on one side, the information has to be kept completely confidential and never divulged to anybody. on the other side, the information has to be readily available for numerous business processes in order to execute transactions (like presenting/divulging information at point of sale).
2) the account related information in (merchant) transaction logs can be 100 times more valuable to the crooks than to the merchant. Basically, to the merchant, the information is worth some part of the profit off the transaction. To the crook, the information can be worth the credit limit and/or account balance of the related account. As a result, the crooks may be able to afford to spend 100 times as much attacking the system as the merchants can afford to spend (on security) defending it.
So, one part of the x9.59 financial standard was to tweak the paradigm and eliminate the value of the information to the crooks, and therefore also the necessity of having to hide the information at all (it didn't do anything to prevent what has been the majority of the breaches in the past several years ... it just eliminated any of the fraud that could occur from those breaches ... and therefore any threat the breach would represent).
misc. past posts mentioning fraud, exploits, threats, vulnerabilities,
and/or risk
https://www.garlic.com/~lynn/subintegrity.html#fruad
the major use of SSL in the world today is this thing we worked on now
commonly referred to as electronic commerce ... lots of past
references to various aspects of SSL
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
where SSL is primarily being used to hide the account and transaction information. Since x9.59 financial standard eliminates the need to hide that information (as a countermeasure to fraudulent financial transactions) .... it not only eliminates the threat from security/data breaches but also eliminates the major use of SSL in the world today
some late breaking news:
Researchers say notification laws not lowering ID theft
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9093659
Researchers say notification laws not lowering ID theft
http://www.networkworld.com/news/2008/070308-citibank-card-scammer-sweatshirt.html
Researchers Say Notification Laws Not Lowering ID Theft
http://news.yahoo.com/s/pcworld/146738
Researchers say notification laws not lowering ID theft
http://www.infoworld.com/article/08/06/05/Notification-laws-not-lowering-ID-theft_1.html
Researchers Say Notification Laws Not Lowering ID Theft
http://www.pcworld.com/businesscenter/article/146738/researchers_say_notification_laws_not_lowering_id_theft.html
with regard to the paradigm involving transaction information ... on one hand it can never be exposed or made available (to anyone) and on the other hand, by definition, the transaction information has to be available to numerous business processes as part of performing transactions.
we've tried making the comment that (in the current paradigm) even if the world were buried under miles of (information hiding) encryption, it still wouldn't prevent information leakage.
we've also tried in detailed discussions using the analogy of "naked
transaction" metaphor ...
https://www.garlic.com/~lynn/subintegrity.html#payments
a military analogy is position in open valley with no cover and the enemy holding all the high ground on the surrounding hills (or like shooting fish in a barrel).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 3, 2008 Subject: IT Security Statistics Blog: Information SecurityA couple years ago ... i worked on classification of the reported exploits/vulnerabilities. The problem was that at the time the descriptions were quite free-form and it took a bit of analysis to try and pry out information for classification. In the past year or so, there has been some effort to add categorizing information to the descriptions. I also wanted to use the resulting classification information in updating my merged security taxonomy and glossary.
Old post referencing attempting classification of CVE entries
https://www.garlic.com/~lynn/2004e.html#43
Also, some number of the more recent classification activities have tended to corroborate my earlier efforts.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 3, 2008 Subject: Are multicore processors driving application developers to explore multithreaded programming options? Blog: Software DevelopmentCharlie had invented compare&swap instruction when working on fine-grain multiprocessor locking for cp67 on 360/67. Trying to get compare&swap instruction added to 370 machines was met with some resistance ... claims being that the test&set instruction was sufficient for multiprocessor kernel operations.
The challenge in getting the compare&swap instruction added to 370 was that a non-kernel use (not specific to multiprocessor locking) had to be created. The result was a set of examples for coordinating multithreaded application operation while avoiding the overhead of kernel calls.
compare&swap was used in the original relational/sql implementation, system/r .... for multithreaded operation ... independent of whether running on a uniprocessor or multiprocessor. By the mid-80s, compare&swap (or a similar instruction) was available on many processors and in use by major database implementations for multithreaded operation ... independent of whether running on a single processor or multiprocessor machine.
In the past, there has been increasing processor performance in both single processor as well as multiprocessor hardware. Recently that has changed, with little advance in single processor performance ... and lots of vendors are moving to multicore as standard ... where additional throughput will only come via concurrent/multithreaded operation.
There have been numerous observations for the past year or two that parallel programming has been the "holy grail" for the past twenty years ... with little or no practical advance in what the majority of programmers are capable of doing (with respect to parallel/concurrent/multithreaded programming).
lots of past posts mentioning multiprocessor operation and/or
compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
misc. past posts mentioning original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr
the original (compare&swap writeup) is almost 40yrs old now ... but
here are some of the examples still in a recent principles of
operation (and over the yrs have been picked up by a large number of
different machines, systems, and applications) ... note
"multiprogramming" is mainframe for multithreaded
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320
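purely as illustration (and not the principles-of-operation examples themselves), a minimal C11 sketch of the kind of multithreaded coordination being described ... a shared counter updated with a compare&swap retry loop, no kernel locking, the same code whether uniprocessor or multiprocessor:

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static _Atomic long counter;

static int worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        long old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;   /* lost the race; 'old' now holds the current value, retry */
    }
    return 0;
}

int main(void)
{
    thrd_t t[4];
    for (int i = 0; i < 4; i++)
        thrd_create(&t[i], worker, NULL);
    for (int i = 0; i < 4; i++)
        thrd_join(t[i], NULL);
    printf("counter = %ld\n", atomic_load(&counter));
    return 0;
}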
btw, cp67 was a morph of the original virtual machine implementation, cp40, from the custom-modified 360/40 to the 360/67 that came standard with virtual memory hardware
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: alt.folklore.computers Date: Wed, 4 Jun 2008 10:17:01 -0700 (PDT) Subject: ARPANet architect: bring "fairness" to traffic managementARPANet architect: bring "fairness" to traffic management
can you say the "wheeler scheduler"
https://www.garlic.com/~lynn/subtopic.html#fairshare
one of the things we had done as part of rate-based flow control and
dynamic adaptive high-speed backbone (and the letter from nsf said
that what we already had running was at least five years ahead of all
nsfnet backbone bids ... it is 20yrs later)
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: lynn@garlic.com Newsgroups: alt.folklore.computers Date: Wed, 4 Jun 2008 10:54:42 -0700 (PDT) Subject: Re: Definition of file spec in commandsOn Jun 4, 6:39 am, greymaus wrote:
misc. past posts mentioning "naked" public key kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Seeking (former) Adventurers Date: Wed, 04 Jun 2008 16:37:28 -0400 Newsgroups: bit.listserv.vmesa-lfollowing are a couple of emails from '78 regarding getting a copy of adventure for vm370/cms
in this post
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
additional followup in this post
https://www.garlic.com/~lynn/2006y.html#19 The History of Computer Role-Playing Games
another old adventure email reference
https://www.garlic.com/~lynn/2007o.html#email790912
in this post
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?
In the above, there was some amount of trouble caused by my making adventure (executable) available internally (via the internal network). I had made an offer that for anybody finishing the game (getting the points), I would send them a copy of the (fortran) source. At least one of the people at the STL lab converted the fortran source to PLI and added a bunch of additional rooms/pts.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 5, 2008 Subject: Anyone know of some good internet Listserv's? Blog: BloggingI got blamed for mailing lists and online computer conferencing on the internal network in the late 70s and early 80s ... i.e. the internal network was larger than the arpanet/internet from just about the beginning until sometime mid-85
somewhat as a result, there was official corporate support which led to "TOOLSRUN" that had both a usenet-mode of operation as well as mailing list mode of operation.
later there was also extensive corporate support for educational
network in both the US (bitnet) and europe (earn)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
the example of toolsrun on the internal network somewhat promoted the creation and evolution of LISTSERV on bitnet.
That has since greatly evolved, been ported to a large number of
different platforms and has a corporation marketing it ... history of
LISTSERV from the vendor's website
http://www.lsoft.com/products/listserv-history.asp
This URL has catalog of LISTSERV lists
http://www.lsoft.com/CataList.html
This page:
http://catalist.lsoft.com/resources/listserv-community.asp?a=4
mentions 51,097 "public" mailing lists and 318,413 "local" mailing lists.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 5, 2008 Subject: Can I ask you to list the HPC/SC (i.e. th High performace computers) which are dedicated to a problem? Blog: Computers and SoftwareA lot of grid, blade and other (massively parallel) technologies evolved for numerically intensive applications like high energy physics. In the past several years, you see vendors trying to move the products into more commercial areas. Early adopters have been in the financial industry. Recent x-over article
from above ...
JPMorgan and Citigroup attempt to increase flexibility and save money
by establishing division- and company-wide services-based
grids. Managing one larger, more inclusive grid is cheaper than
managing 10 line-of-business clusters, and the shared services model
allows for business applications to join computing applications on the
high-performance infrastructure.
... snip ...
One of the issues is that the management for these large resource
intensive applications has a lot of similarities to mainframe batch
"job" scheduling ... reserving the resources necessary for efficient
execution. An example within the GRID community
http://www.cs.wisc.edu/condor/
top500 by industry, w/financial largest category after "not specified"
http://www.top500.org/stats/list/30/apparea
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers Date: Fri, 06 Jun 2008 05:20:05 -0400re:
at '91 SIGOPS (SOSP13, Oct 13-16) held at Asilomar, Jim and I had a
running argument about whether "availability" required proprietary
hardware ... which spilled over into the festivities at the SIGOPS
night Monterey aquarium session ... past references to the "argument"
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2005d.html#2 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2006o.html#24 computational model of transactions
Anne and I were in the middle of our ha/cmp product with "commodity"
hardware
https://www.garlic.com/~lynn/subtopic.html#hacmp
as well as our "cluster" scale-up activities ... old email references:
https://www.garlic.com/~lynn/lhwemail.html#medusa
and only a dozen weeks away from the meeting referenced in this post:
https://www.garlic.com/~lynn/95.html#13
Jim had spent nearly a decade with proprietary "availability" hardware
... first at Tandem and then moving on to DEC (vax/cluster) ... he
was there until the DEC database group was sold off to Oracle in '94
... reference here
https://en.wikipedia.org/wiki/Oracle_Rdb
As per previous references ... it was only fitting that later he was up on the stage espousing availability and scale-up for Microsoft clusters.
podcast reference for the tribute:
tribute also by ACM SIGMOD
https://web.archive.org/web/20111118062042/http://www.sigmod.org/publications/sigmod-record/0806
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers Date: Fri, 06 Jun 2008 05:31:26 -0400Anne & Lynn Wheeler <lynn@garlic.com> writes:
oh, and my little short story at the tribute is 1hr 14min (near the end) into the 23083 podcast
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Fri, 06 Jun 2008 10:58:40 -0400Eric Smith <eric@brouhaha.com> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 6, 2008 Subject: Digital cash is the future? Blog: Information SecuritySomewhat related are these two answers with regard to security breaches
which discuss how some of the vulnerabilities are characteristic of the underlying paradigm ... which requires fundamental changes ... not just papering over.
We had been brought in to consult with a small client/server startup that had invented this technology called SSL that they wanted to use for payment transactions on their server ... the result is now frequently referred to as electronic commerce.
There have been several digital cash efforts in the past ... all of them running into various kinds of problems. One example was Digicash ... and as part of the liquidation, we were brought in to evaluate various of the assets.
Another was Mondex. As part of potential move of Mondex into the states we had been asked to design, spec, and cost system for country-wide deployment.
It turned out that many of these digital cash efforts were some flavor of "stored value" ... and were significantly motivated by the digital cash operator holding the "float" on the value in the infrastructure. During the height of these efforts more than a decade ago in Europe ... the EU central banks issued statements that the operators would have to start paying interest on the value in the accounts (once past the startup phase). That statement significantly reduced the interest (slight pun, i.e. the expected float disappeared) in many of the efforts
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Trusted (mainframe) online transactions Newsgroups: bit.listserv.ibm-main Date: Fri, 06 Jun 2008 13:48:18 -0400lynn writes:
a couple recent posts referencing podcast files of the tribute
https://www.garlic.com/~lynn/2008i.html#50
https://www.garlic.com/~lynn/2008i.html#51
the first presentation in the technical sessions was by Bruce Lindsay
talking about Jim's days at IBM San Jose research and working on the
original relational/sql implementation ... system/r ... various past
posts
https://www.garlic.com/~lynn/submain.html#systemr
a big part of Bruce's presentation was Jim's formalization of transaction semantics and database operation, which turned out to be the critical enabler for online transactions (being trusted enough to replace manual/paper).
... oh and my remembrance story (above reference) is 1hr 14mins into the technical session podcast that starts with Bruce's presentation.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 6, 2008 Subject: Is data classification the right approach to pursue a risk based information security program? Blog: Information Securitydata classification has most frequently been associated with disclosure countermeasures.
risk-based information security program would really involve detailed
threat & vulnerability analysis, ... decade old post re thread between
risk management and information security
https://www.garlic.com/~lynn/aepay3.htm#riskm
then, if the threat/vulnerability is information disclosure ... a security proportional to risk analysis can be performed ... then disclosure countermeasures proportional to risk can be specified and the data may be given classification corresponding to the necessary disclosure countermeasures.
there is the security acronym PAIN
P ... privacy (or sometimes CAIN and confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation
however, in this answer related to security breaches ... a solution is
discussed which effectively eliminates requirement for
privacy/confidentiality ... with the application of strong
authentication and integrity (eliminating any requirement to prevent
information disclosure)
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243464-24494306
https://www.garlic.com/~lynn/2008i.html#42 Security Breaches
One of the other things we had done was co-author the financial industry privacy standard (x9.99) ... part of which involved studying privacy regulations in other countries ... as well as meeting with some of the HIPAA people (looking at situations where medical information can leak via financial information ... like a financial statement listing a specific medical procedure or treatment).
We also did a different kind of classification for one of the financial sectors ... asserting that most data classification approaches have simplified information to the point where it involves just the degree of protection. we asserted that potentially much better countermeasures might be achieved if the original threat/vulnerability assessment was retained for each piece of information ... traditional disclosure countermeasures tend to be limited to the degree that information is hidden. Knowing the actual threat/vulnerability for each piece of information could result in much better countermeasures.
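a toy C sketch of the contrast (the field names are hypothetical, just to illustrate retaining the assessment per item rather than collapsing it to a single level):

#include <time.h>

/* typical approach: classification collapses everything into a single
   protection level */
enum classification { PUBLIC, INTERNAL, CONFIDENTIAL, SECRET };

struct classified_item {
    const char          *name;
    enum classification  level;
};

/* sketch of retaining the underlying assessment per item, so that
   countermeasures (not just hiding) can be matched to the actual
   threat/vulnerability -- and re-evaluated as technology changes */
struct assessed_item {
    const char *name;
    const char *threat;            /* e.g. "skimming", "insider copy"    */
    const char *vulnerability;     /* how the item can be compromised    */
    double      fraud_value;       /* value of the item to an attacker   */
    double      business_value;    /* value to the owning business       */
    time_t      assessed_at;       /* profile can change over time       */
    const char *countermeasure;    /* may be something other than hiding */
};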
As an example, we would point to what we did in the x9.59 financial
standard where we eliminated the threat/vulnerability from the
majority of breaches that have been in the news ... x9.59 didn't
address preventing the breaches ... x9.59 eliminated the ability of
attackers to use the information for fraudulent transactions.
https://www.garlic.com/~lynn/x959.html#x959
a little x-over from question about definition of risk assessment
vis-a-vis threat assessment
http://www.linkedin.com/answers/finance-accounting/risk-management/FIN_RMG/247411-23329445
https://www.garlic.com/~lynn/2008i.html#60
taken from my merged security taxonomy & glossary
https://www.garlic.com/~lynn/index.html#glosnote
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 7, 2008 Subject: The Price Of Oil --- going beyong US$130 a barrel Blog: Energy and DevelopmentLate last week, an economist was on a business show talking about the price of oil ... seemingly having difficulty focusing on the changing landscape caused by globalization vis-a-vis traditional domestic economic price & demand forces. It seems like past experience won out, and so he ended with the observation that increasing prices will dampen demand, which will result in prices coming back down.
This somewhat ignores the new dynamics that global demand has increased significantly and the value of the dollar has fallen. Europe could be paying the equivalent of $100/barrel; dollar declines; Europe continues to pay the same per barrel (in Euros) ... but the US now has to pay $150/barrel ... just to stay even/compete with the Europeans (effectively the price in Euros hasn't changed and so there isn't any corresponding dampening effect on European demand).
Secondary effects would be that there is some additional price elasticity in Euros ... i.e. Europeans could afford to increase their bid for scarce resource by say 20 percent (which would translate into $180/barrel in dollars); Europeans would only see a 20 percent increase in price while US could see an overall 80 percent increase in price (this comparison applies to several world economies, not just Europe).
The increasing price and demand would normally result in increased production. However, there was a recent observation about the interaction between retiring baby boomers and oil production projects. The claim was that oil production projects take 7-8 yrs to complete, but with the advent of the retiring baby boomers, there is a shortage of experienced people for all the possible projects. The claim is that the number of projects to bring additional oil resources online is only about 50 percent of what would be expected (because of the lack of skill and experience resulting from retiring baby boomers).
recent blog entry
https://www.garlic.com/~lynn/2007q.html#42
quoting business news channel program that in 2005, oil projects were underfunded by 1/3rd which leads to 1m barrel/day production shortfall in 2010-2011. There is 7-8yr lag to develop new oil production sources and 1/2 of the production project specialists reach retirement over the next 3 yrs (which is claimed to be limiting factor on the number of active projects).
another related blog entry
https://www.garlic.com/~lynn/2008h.html#3
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 08 Jun 2008 13:46:23 -0400"Joe Morris" <j.c.morris@verizon.net> writes:
original fast redispatch didn't reload floating point registers ... since kernel didn't use them ... and assumed that the values hadn't been changed during path thru the kernel.
I had also done much of the kernel multiprocessor support ... in fact, had it installed internally on the HONE system and some other installations. however, the decision to "ship" multiprocessor support in the product wasn't made until after i shipped the resource manager.
this created a number of problems.
the 23jun69 unbundling announcement started charging for software
(somewhat in response to various litigation ... including by the gov)
... however the case was made that the kernel software should still be
free.
https://www.garlic.com/~lynn/submain.html#unbundle
however, by the time of my resource manager ... things were starting to move in the direction of also charging for kernel software (might be considered in part motivated by clone mainframes) ... and my resource manager was chosen to be the guinea pig ... as a result, i got to spend a bunch of time with business & legal people working on policy for kernel software charging.
during the transition period ... one of the "policies" was that "free
kernel" couldn't have as prerequisite "charged-for" kernel software. I
had included quite a bit of multiprocessor kernel reorganization in the
resource manager (w/o including any explicit multiprocessor support).
The problem then became releasing "free kernel" multiprocessor support
that required the customer to also "buy" the resource manager (to get
the multiprocessor kernel reorganization). The eventual decision was
made to remove about 90 percent of the code from the resource manager
(w/o changing its price) and migrating it into the "free" kernel. Lots
of posts mentioning multiprocessor support and/or compare&swap
instruction
https://www.garlic.com/~lynn/subtopic.html#smp
The "big" problem in the OCO time-frame ... was that there was
significant redo of the kernel multiprocessor support ... primarily
oriented towards improving TPF performance on 3081 multiprocessor. some
recent posts mentioning TPF:
https://www.garlic.com/~lynn/2008.html#29 Need Help filtering out sporge in comp.arch
https://www.garlic.com/~lynn/2008g.html#14 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#38 American Airlines
the issue was that TPF didn't have multiprocessor support ... and the company had initially decided that there wouldn't be a non-multiprocessor 308x machine. That met to run TPF on 308x machine ... it had to run under VM370. Furthermore, if TPF was the primary workload, only one of the processors would be busy (unless multiple TPF virtual machines ran). The issue was that majority of virtual machine kernel executation (for a specific virtual machine) tended to be serialized. 100percent busy of all processors was achieved by having multiple (single processor) virtual machines.
To improve TPF thruput there was a rework of the kernel multiprocessor support to try and achieve overlapped emulation with TPF execution (i.e. like i/o emulation going on in parallel with TPF execution as opposed to strictly serialized). This included a significant increase in cross-processor signaling, handshaking, and lock interference. As a result, nearly all the non-TPF multiprocessor customers saw 10-15 percent thruput degradation (for a small increase in overlapped execution and thruput for the TPF customers). I can also believe that in this rework, they flubbed the fast (re)dispatch.
Eventually, the company decided to announce & ship a single processor 308x machine ... the 3083 ... primarily for ACP/TPF customers. After some additional delay, TPF eventually got around to shipping its own multiprocessor support.
for a different transient failure story ... one i heard about the berkeley cdc6600 ... it was something like tuesday mornings at 10am the machine would thermal shutdown. Eventually they worked out that tuesday morning was when they watered the grass around the bldg and 10am was a class break that resulted in a large amount of flushing going on in the restrooms. The combination resulted in loss of water pressure and the thermal overload.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 8, 2008 Subject: I am trying to find out how CPU burst time is caluculated based on which CPU scheduling algorithms are created ? Blog: Computers and SoftwareAs an undergraduate in the 60s, i created dynamic adaptive scheduling that was used in cp67 and that I later used in my resource manager product shipped for vm370. My dynamic adaptive scheduling supported a number of different resource allocation policies ... including "fair share". In the 70s, this was also frequently referred to as the "wheeler" scheduler.
The size of the CPU burst was adjustable and used to tailor responsiveness ... a trade-off between things like responsiveness of the task being scheduled, other tasks in the system, cache-hit ratio (execution continues for a long enuf period to recover the cost of populating the processor cache) ... and whether or not preemption was active.
One of the things in the 60s & 70s was that there were frequently implementations that would confuse size of CPU burst with total resource consumption (one of the things that dynamic adaptive scheduling did was treat size of CPU burst and total resource consumption as independent optimizations).
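as a concrete illustration of keeping the two independent, below is a minimal toy sketch (my own code and names, nothing like the actual cp67/vm370 implementation): the burst size is purely a responsiveness/cache-reuse knob, while accumulated consumption relative to the fair-share entitlement is what drives dispatch priority.

import heapq

class Task:
    def __init__(self, name, share=1.0):
        self.name = name
        self.share = share        # entitled fraction under a "fair share" policy
        self.consumed = 0.0       # total CPU consumed so far

    def priority(self):
        # tasks that have used less than their entitlement sort earlier;
        # note that the burst size never enters this calculation
        return self.consumed / self.share

def run(tasks, burst, total_time):
    # dispatch tasks in priority order, each for one CPU burst at a time
    clock = 0.0
    heap = [(t.priority(), i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    while clock < total_time:
        _, i, task = heapq.heappop(heap)
        task.consumed += burst    # burst sized for responsiveness/cache reuse
        clock += burst
        heapq.heappush(heap, (task.priority(), i, task))
    return {t.name: round(t.consumed, 2) for t in tasks}

# a task with twice the share accumulates roughly twice the CPU,
# regardless of whether the burst is large or small
print(run([Task("A", share=2.0), Task("B", share=1.0)], burst=0.01, total_time=30))

changing the burst alters responsiveness and cache behavior, but (as the priority calculation shows) not the long-term split of the machine between tasks.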
Lots of past posts regarding dynamic adaptive scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
recent post discussing some interaction between resource manager,
multiprocessor support and charging for kernel software:
https://www.garlic.com/~lynn/2008i.html#57
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft versus Digital Equipment Corporation Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Mon, 09 Jun 2008 10:13:37 -0400krw <krw@att.bizzzzzzzzzz> writes:
there has been some amount of patent activity as defensive action ... in case claims about prior art haven't proved sufficient.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 9, 2008 Subject: Threat assessment Versus Risk assessment Blog: Risk Managementfrom my merged security taxonomy and glossary
one of the definitions of "risk" (from nist 800-60):
The level of impact on organizational operations (including mission,
functions, image, or reputation), organizational assets, individuals,
other organizations, or the Nation resulting from the operation of an
information system given the potential impact of a threat and the
likelihood of that threat occurring.
....
so a risk is the impact on the organization of a threat.
see taxonomy/glossary for more ...
definition of risk assessment (from nist 800-30):
The process of identifying the risks to system security and
determining the probability of occurrence, the resulting impact, and
additional safeguards that would mitigate this impact.
... and
threat assessment (from gao report 0691):
The identification and evaluation of adverse events that can harm or
damage an asset. A threat assessment includes the probability of an
event and the extent of its lethality. Threats may be present at the
global, national, or local level.
....
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 9, 2008 Subject: Could you please name sources of information you trust on RFID and/or other Wireless technologies? Blog: WirelessA lot of RFID was originally targeted for inventory applications (i.e. EPC as an enhancement to the universal product code, laser-scanned barcodes) ... static data that could be easily harvested.
An issue arises when the same/similar technology is used for transaction operations ... and becomes vulnerable to eavesdropping and/or similar kinds of threats.
In the mid-90s, had been working on chips for x9.59 financial
transaction standard
https://www.garlic.com/~lynn/x959.html#x959
one of the threats addressed by x9.59 was to make it immune from eavesdropping and harvesting attacks ... aka it didn't do anything to eliminate the attacks ... it just made the information useless to the crooks for the purpose of performing fraudulent transactions (I've also discussed this in various answers regarding eliminating the threat from breaches).
we had semi-facetiously been commenting that we would take a $500 milspec part and aggressively cost reduce it by 2-3 orders of magnitude, while increasing its security.
we were approached by some of the transit operations with a challenge to also be able to implement it as a contactless chip ... being able to perform an x9.59 transaction within the transit gate power and timing requirements (i.e. contactless chip obtaining power from the radio frequency and executing the operation within the small subsecond time constraints required for transit gate operation).
Some amount of this shows up in the AADS chip strawman patent
portfolio
https://www.garlic.com/~lynn/x959.html#aads
in the 90s, one of the EPC (and aads chip strawman) issues was aggressive cost reduction. Basically wafers have fixed manufacturing costs, so chip cost is related to the number of chips that can be obtained from a wafer. A limitation a decade ago was that the technology to cut (slice&dice) chips from the wafer was taking more (wafer) surface area than the (ever shrinking) chips themselves.
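a back-of-the-envelope sketch (all numbers invented, purely illustrative) of why per-chip cost tracks chips-per-wafer ... and why the cutting kerf became the limit once the chips themselves shrank below it:

def cost_per_chip(wafer_cost, wafer_area_mm2, chip_area_mm2, kerf_area_mm2):
    # fixed wafer cost divided by the number of usable chips per wafer
    chips = wafer_area_mm2 // (chip_area_mm2 + kerf_area_mm2)
    return round(wafer_cost / chips, 3)

WAFER_COST = 1000.0          # cost to process one wafer (hypothetical)
WAFER_AREA = 31000.0         # roughly a 200mm wafer, in mm^2

# large chip: the kerf is a small fraction of the area consumed per chip
print(cost_per_chip(WAFER_COST, WAFER_AREA, chip_area_mm2=100.0, kerf_area_mm2=2.0))

# tiny RFID-class chip: the kerf now dominates, capping further cost reduction
print(cost_per_chip(WAFER_COST, WAFER_AREA, chip_area_mm2=0.5, kerf_area_mm2=2.0))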
A lot of the current churn regarding RFID technologies is attempting to use it in applications requiring confidentiality and/or privacy (using a technology that can be easily eavesdropped for applications that have an eavesdropping vulnerability).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@xxxx> Date: Mon, 09 Jun 2008 18:07:12 -0400 Subject: Re: Ransomware MailingList: cryptoJohn Ioannidis wrote:
In the early 90s, when glasshouse and mainframes were seeing a significant downturn in their use ... with lots of stuff moving off to PCs, there was a study that half of the companies that had a disk failure involving (business) data that wasn't backed up ... filed for bankruptcy within 30 days. The issue was that the glasshouse tended to have all sorts of business processes to backup business critical data. Disk failures that lost stuff like billing data had significant impact on cash flow (there was also the case of a large telco that had a bug in its nightly backup, and when the disk with customer billing data crashed ... they found that they didn't have valid backups).
Something similar also showed up in the Key Escrow meetings in the mid-90s with regard to business data that was normally kept in encrypted form ... i.e. it would require replicated key backup/storage in order to retrieve data (countermeasure to single point of failure). part of the downfall of key escrow was that it seemed to want all keys ... not just those infrastructures where the business needed to have replicated its own keys.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: DB2 25 anniversary Newsgroups: alt.folklore.computers Date: Tue, 10 Jun 2008 09:33:50 -0400IBM DB2's 25th Anniversary: Birth Of An Accidental Empire
from above:
Saturday June 7 was the 25th anniversary of DB2. Ingres and Oracle
preceded it as commercial products by a narrow margin, but the launch of
DB2 on June 7, 1983, marked the birth of relational database as a
cornerstone for the enterprise
... snip ...
some old posts mentioning original relational/sql implementation,
System/R
https://www.garlic.com/~lynn/submain.html#systemr
System/R technology transfer was to endicott for sql/ds ... a few yrs ago, one of the people on the endicott end of the technology transfer had his 30yr corporate anniversary and I was asked to contribute. I put together a log of email exchanges with him from the sql/ds technology transfer period.
this old post mentioning some people at a meeting in Ellison's
conference room
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15
I've periodically mentioned that two of the people in the meeting show
up in a small client/server startup responsible for something called
the commerce server. we were called in to consult because they wanted
to do payment transactions on the server. They had this technology
called SSL they wanted to use and the result is now frequently referred
to as electronic commerce ... some references
https://www.garlic.com/~lynn/subnetwork.html#gateway
one of the other people (mentioned in the same meeting) claimed to have handled most of the technology transfer from Endicott to STL for DB2.
for additional drift, some recent posts mentioning tribute to Jim Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: DB2 25 anniversary: Birth Of An Accidental Empire Date: Tue, 10 Jun 2008 09:42 Blog: The Greater IBM ConnectionIBM DB2's 25th Anniversary: Birth Of An Accidental Empire
from above:
Saturday June 7 was the 25th anniversary of DB2. Ingres and Oracle
preceded it as commercial products by a narrow margin, but the launch of
DB2 on June 7, 1983, marked the birth of relational database as a
cornerstone for the enterprise
... snip ...
some old posts mentioning original relational/sql implementation,
System/R
https://www.garlic.com/~lynn/submain.html#systemr
System/R technology transfer was to Endicott for SQL/DS ... a few yrs ago, one of the people on the Endicott end of the technology transfer had his 30yr corporate anniversary and I was asked to contribute. I put together a log of email exchanges with him from the SQL/DS technology transfer period.
This old post mentioning some people at a meeting in Ellison's
conference room
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15
I've periodically mentioned that two of the people in the meeting show
up in a small client/server startup responsible for something called
the commerce server. we were called in to consult because they wanted
to do payment transactions on the server. They had this technology
called SSL they wanted to use and the result is now frequently referred
to as electronic commerce ... some references
https://www.garlic.com/~lynn/subnetwork.html#gateway
one of the other people (mentioned in the same meeting) claimed to have handled most of the technology transfer from Endicott to STL for DB2.
for additional drift, some recent posts mentioning tribute to Jim Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 10, 2008 Subject: Is the credit crunch a short term aberation Blog: Risk ManagementA few issues
CDOs were used two decades ago during the S&L crisis to obfuscate the underlying value and unload questionable properties.
In the past, loan originators had to pay some attention to loan quality. For the past several years, loan originators have used toxic CDOs to unload their loans w/o having to pay any attention to quality ... their only limitation was how many loans could they originate (w/o having to pay attention to quality).
Institutions buying toxic CDOs effectively also didn't pay any attention to quality; they could buy a toxic CDO, borrow against the full value, and buy another ... repeating this 40-50 times ... aka "leveraging" with a very small amount of actual capital. A couple percent fall in toxic CDO value totally wipes out the investment. This supposedly was a contributing factor in the crash of '29, where investors had as little as 20 percent capital (compared to the current situation with maybe 1-2 percent).
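a rough illustration (hypothetical numbers, not taken from any actual filing) of how that leverage magnifies a small decline in the value of the underlying instruments:

def equity_after_decline(capital, leverage, decline):
    # remaining equity after the leveraged position falls in value
    position = capital * leverage
    return capital - position * decline

# 1920s-style margin: roughly 20 percent capital (5x leverage); a 2 percent fall hurts
print(equity_after_decline(capital=20, leverage=5, decline=0.02))    # 18.0

# 40-50x leverage on 1-2 percent capital: the same 2 percent fall wipes it out
print(equity_after_decline(capital=2, leverage=50, decline=0.02))    # 0.0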
When the problems with toxic CDO value started to percolate up ... it became something like consumer product contamination ... toxic CDOs were too good at obfuscating the underlying value ... not all of the toxic CDOs had significant value problems ... but it was nearly impossible to tell which were good and which were bad ... so there was a rush to dump all toxic CDOs.
Once the current crisis settles out ... things aren't likely to return to the previous freewheeling days with no attention to loan quality and enormous leveraging ... recent article from today
HSBC says excessive bank leverage model bankrupt
http://www.reuters.com/article/rbssFinancialServicesAndRealEstateNews/idUSL1014625020080610
long winded, decade old post discussing some of the current problems
...including the need for visibility into underlying value in CDO-like
instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 10, 2008 Subject: How do you manage your value statement? Blog: Change Managementrelated answer to this change management question
old post with some extracts from fergus/morris book discussing effects
in the wake of future system project failure
https://www.garlic.com/~lynn/2001f.html#33
where things became much more rigid, structured, oriented towards maintaining the status quo and resisting change.
it didn't help that I sponsored Boyd's briefings in the corporation
... lots of past Boyd references
https://www.garlic.com/~lynn/subboyd.html
part of Boyd's message ... embodied in OODA-loop metaphor was not only agility and adaptability but being able to do it much faster than your competition.
....
a big portion of outsourcing has been about getting sufficient skills ... not just the money. We looked at educational competitiveness in the early 90s. When we were interviewing in that period ... all of the 4.0 students from top univ. were foreigners and many were under obligations to return home after working in the US 5-8 yrs. Half the technical PHDs from top univs. were foreigners ... we've claimed the internet bubble wouldn't even have been possible w/o all those highly skilled foreigners.
the other example we've used is Y2K remediation happening at the same time as the internet bubble. Lots of businesses were forced to outsource nuts&bolts business dataprocessing because so many were flocking to the internet bubble. They were forced into that outsourcing ... not because of salary differential ... but in order to get anybody to do the work. After the trust relations were established (sort of forced by not being able to get the skills anywhere else) ... the outsourcing work continued. After the internet bubble burst ... was when people started complaining about all these jobs having gone overseas ... but they weren't complaining in the middle of the bubble.
US educational system now ranks near the bottom of industrial nations
... which is contributing to the jobs moving as much as the salary
differential. recent posts on the subject:
https://www.garlic.com/~lynn/2007j.html#58 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?
https://www.garlic.com/~lynn/2007u.html#78 Education ranking
https://www.garlic.com/~lynn/2007u.html#80 Education ranking
https://www.garlic.com/~lynn/2007u.html#82 Education ranking
https://www.garlic.com/~lynn/2007v.html#10 About 1 in 5 IBM employees now in India - so what ?
https://www.garlic.com/~lynn/2007v.html#16 Education ranking
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#20 Education ranking
https://www.garlic.com/~lynn/2007v.html#38 Education ranking
https://www.garlic.com/~lynn/2007v.html#39 Education ranking
https://www.garlic.com/~lynn/2007v.html#44 Education ranking
https://www.garlic.com/~lynn/2007v.html#45 Education ranking
https://www.garlic.com/~lynn/2007v.html#51 Education ranking
https://www.garlic.com/~lynn/2007v.html#71 Education ranking
https://www.garlic.com/~lynn/2008.html#52 Education ranking
https://www.garlic.com/~lynn/2008.html#55 Education ranking
https://www.garlic.com/~lynn/2008.html#60 Education ranking
https://www.garlic.com/~lynn/2008.html#62 competitiveness
https://www.garlic.com/~lynn/2008.html#81 Education ranking
https://www.garlic.com/~lynn/2008.html#83 Education ranking
https://www.garlic.com/~lynn/2008b.html#13 Education ranking
https://www.garlic.com/~lynn/2008c.html#56 Toyota Beats GM in Global Production
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 10, 2008 Subject: How do you manage your value statement? Blog: Change Managementre:
and:
http://www.linkedin.com/answers/management/change-management/MGM_CMG/248432-3786937
When I was an undergraduate ... I was brought in to help get Boeing Computer Services going. Computing facilities had been treated purely as an overhead/expense item ... dataprocessing was starting to be viewed as a competitive advantage ... and moving it into its own line of business gave it some semblance of having P&L responsibility. 747 serial #3 was flying the skies of Seattle getting certification. A tour of the 747 mockup included the statement that the 747 would carry so many people that 747s would be served by a minimum of four jetways.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 11, 2008 Subject: Do you have other examples of how people evade taking resp. for risk Blog: Change Managementre:
business school article that mentions responsibility for current credit crisis
http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933 (gone 404 and/or requires registration)
the above article apparently was only freely available for the 1st 30 days after publication.
a couple quotes from the article posted here (along with several other
refs)
https://www.garlic.com/~lynn/2008g.html#32
the business school article includes comments that possibly 1000 people were responsible for the current credit crunch and that it would go a long way towards fixing the problem if the gov. could figure out how they could lose their jobs.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: EXCP access methos Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 11 Jun 2008 17:48:16 -0400DASDBill2 writes:
channels run with "real" data transfer addresses. virtual machine (and "VS" system application EXCP) channel programs have virtual addresses.
CCWTRANS scanned the virtual machine channel program ... creating a "shadow" copy of the virtual machine channel program ... fetching/fixing the related virtual addresses ... and replacing the virtual addresses with real addresses.
The original translation of os/360 to virtual storage operation included crafting a copy of (cp67's) CCWTRANS into the side of VS2 ... to perform the equivalent function of EXCP channel programs (whether application or access methods). VS2 (SVS & then MVS) has had the same problem with access methods (and other applications) creating channel programs with "virtual" addresses ... and then issuing EXCP. At that point, EXCP processing has the same "problem" as virtual machine emulation ... translating channel programs built with virtual addresses into shadow copy that has "real" addresses.
EXCPVR was introduced to indicate that a channel program with "real"
addresses was being used (rather than traditional EXCP channel program).
A discussion of EXCPVR:
http://publib.boulder.ibm.com/infocenter/zos/v1r9/topic/com.ibm.zos.r9.idas300/efcprs.htm#efcprs
disk seek channel commands ... for virtual machine non-full-pack minidisks, would also result in a "shadow" made of the seek argument ... adjusting it as appropriate (i.e. a minidisk could be 30 cyls starting at real cylinder 100 ... the shadow would have cylinder numbers adjusted by 100 ... unless it attempted to access more than 30 cyls ... which would result in the shadow being adjusted to an invalid cylinder number).
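a hedged sketch of the two translation steps just described (my own names and structures, nothing to do with the actual CCWTRANS/DMKCCW code): data-transfer addresses are replaced with fetched/fixed real addresses in a shadow copy, and minidisk seek arguments are relocated into the real extent (or forced invalid when out of range):

def translate_channel_program(ccws, virt_to_real, mdisk_start, mdisk_size):
    # ccws         -- list of dicts: {'cmd': ..., 'addr': virtual addr or cyl no}
    # virt_to_real -- callable mapping a virtual address to a (pinned) real address
    # mdisk_start  -- real cylinder where the minidisk begins
    # mdisk_size   -- number of cylinders in the minidisk
    shadow = []
    for ccw in ccws:
        s = dict(ccw)                       # never modify the guest's own copy
        if ccw['cmd'] == 'SEEK':
            cyl = ccw['addr']
            if cyl >= mdisk_size:
                s['addr'] = 0xFFFF          # out of extent: force an invalid cylinder
            else:
                s['addr'] = mdisk_start + cyl   # relocate into the real extent
        else:
            # data-transfer CCW: page fetched/fixed, then the real address is used
            s['addr'] = virt_to_real(ccw['addr'])
        shadow.append(s)
    return shadow

# toy example: 30-cylinder minidisk starting at real cylinder 100
prog = [{'cmd': 'SEEK', 'addr': 5}, {'cmd': 'READ', 'addr': 0x20000}]
print(translate_channel_program(prog, virt_to_real=lambda v: v + 0x100000,
                                mdisk_start=100, mdisk_size=30))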
OS360 used a 3 channel command prefix ... "SEEK", followed by a "set file mask" command and then a "TIC" (transfer/branch) to the channel program referenced by EXCP (it didn't need to scan/translate the passed channel program ... just position the arm and then prevent the passed channel program from moving the arm again).
There was a version of CP67 that was converted to run on 370s ("CP67-I" system) ... which was used extensively inside IBM pending availability of VM370 product. In the morph of CP67 to VM370 product, the CCWTRANS channel program translation routine became DMKCCW.
past posts mentioning VS2 effort started out by crafting cp67 CCWTRANS
to get channel program translation for EXCP:
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007e.html#46 FBA rant
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
https://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
https://www.garlic.com/~lynn/2007n.html#35 IBM obsoleting mainframe hardware
https://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
https://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007s.html#2 Real storage usage - a quick question
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: EXCP access methos Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 11 Jun 2008 18:15:30 -0400Anne & Lynn Wheeler <lynn@garlic.com> writes:
an early use of the internal network was a distributed development project between the science center and endicott.
the internal network technology was created at the science center
(as well as cp67, gml, lots of other stuff)
https://www.garlic.com/~lynn/subtopic.html#545tech
the internal network was larger than the arpanet/internet from just
about the beginning to possibly mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet
the 370 virtual memory hardware architecture was well specified ... and endicott approached the science center about providing 370 virtual machine support for early software testing ... i.e. in addition to providing 360 and 360/67 virtual memory emulation ... cp67 would be modified to also provide the option of 370 virtual machines with 370 virtual memory emulation.
the original cms multi-level source maintenance system was developed as part of this effort (cms & cp67 had source maintenance, but it was single level "update").
part of the issue was that this would run on the science center cp67 time-sharing system which included access by numerous non-employees (many from various educational institutions in the cambridge/boston area). 370 virtual memory was a closely held corporate secret and so there had to be a lot of (security) measures to prevent it being divulged.
the basic cambridge cp67 time-sharing system ran "CP67-L".
eventually, in a 360/67 virtual machine, a "CP67-H" kernel ran which had the modifications to provide 370 virtual machines as an option. This provided isolation, preventing the general time-sharing users from being exposed to any of the 370 features.
then a set of updates were created that modified the CP67 kernel to run on 370 "hardware" .... a "CP67-I" kernel would then run in a 370 virtual machine provided by a "CP67-H" kernel running in a 360/67 virtual machine.
CP67-I was in regular operation a year before the first engineering 370 machine with virtual memory hardware was working. In fact, CP67-I was used as a test case when that first engineering machine became operational.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 12, 2008 Subject: Next Generation Security Blog: TelecommunicationsI gave a graduate student seminar at ISI/USC in '97 (including ISI rfc-editor group and some e-commerce groups) about "internet" not being business critical technology.
It was somewhat based on our much earlier work for our availability ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
where we had done a detailed threat and vulnerability study of tcp/ip and the internet.
We had used that information when we were asked to consult with a small
client/server startup that wanted to do payment transactions on their
server and had this technology they had invented called SSL that they
wanted to use ... that work is now frequently referred to as
electronic commerce. As part of deploying the payment gateway for
processing the transactions ... we had to do a large number of
compensating procedures (and countermeasures) ... not just for
strictly (traditional) security purposes ... but availability and
integrity also.
https://www.garlic.com/~lynn/subnetwork.html#gateway
We somewhat later formalized some of this as parameterised risk
management that shows up in the aads patent portfolio
https://www.garlic.com/~lynn/x959.html#aads
that provides a risk management framework that can dynamically adapt across a large number of different changing circumstances as well as adapt over time.
slightly related answer involving working on categorizing threats and
vulnerabilities:
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243460-21457240
https://www.garlic.com/~lynn/2008i.html#43 IT Security Statistics
For the use of SSL between the server and the payment gateway ... we
had "sign-off" over implementation deployment and could mandate some
number of compensating processes. However, we weren't allowed similar
control over the browser/server interface. Shortly after deployment
... we made facetious comments about SSL being "comfort" mechanism (as
opposed to security mechanism) ... lots of past posts on the subject
https://www.garlic.com/~lynn/subpubkey.html#sslcert
the biggest use of SSL in the world today is for this thing called electronic commerce to "hide" account numbers and transaction details.
In the mid-90s, the X9A10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for all retail payments ... and came up with the x9.59
financial standard
https://www.garlic.com/~lynn/x959.html#x959
Part of the x9.59 financial standard was to eliminate the vulnerability associated with divulging account numbers and transaction details ... slightly tweaking the existing paradigm. With it no longer necessary to hide the account numbers, the problems with the majority of the security breaches in the news go away (it doesn't stop the breaches, it just eliminates any resulting fraudulent transactions). Since the information no longer has to be hidden, it also eliminates the major use of SSL in the world today.
A lot of the time there are security professionals adding patches on top of a (frequently faulty) infrastructure w/o really understanding the underlying fundamentals. In fact, nearly by definition, any infrastructure requiring frequent patches implies fundamental infrastructure flaws (a simple analogy is that there frequently are regulations about NOT being able to use patched tires in commercial operations).
there was a great talk by Bruce Lindsay at the recent tribute for Jim Gray
... where he explains that Jim's work on formalizing transactions was
the real enabler for online transactions (being able to "trust"
electronic processing in lieu of manual/paper operations). lots of
past posts referencing the period
https://www.garlic.com/~lynn/submain.html#systemr
some recent posts referencing the podcasts from the tribute:
https://www.garlic.com/~lynn/2008i.html#50
https://www.garlic.com/~lynn/2008i.html#51
https://www.garlic.com/~lynn/2008i.html#54
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 12, 2008 Subject: The End of Privacy? Blog: Information Securitysome of the issue has been confusing authentication and identification.
in most situations where the requirement is to verify that an entity is allowed to do something, it is possible to implement authentication (that doesn't require divulging personal information). however, because of the frequent confusion about the difference between authentication and identification ... there is a fall-back to requiring identification (rather than authentication) ... which involves divulging some level of personal information.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Outsourcing dilemma or debacle, you decide... Newsgroups: bit.listserv.ibm-main Date: Thu, 12 Jun 2008 11:35:37 -0400howard.brazee@CUSYS.EDU (Howard Brazee) writes:
... back in the days of having to walk ten miles to school, barefoot in the snow ... uphill both ways.
slightly related post in this blog ... that drifted over into
outsourcing:
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?
https://www.garlic.com/~lynn/2008i.html#66 How do you manage your value statement?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 12, 2008 Subject: Should The CEO Have the Lowest Pay In Senior Management? Blog: Information SecurityBusiness news channel had a recent editorial statement that in the past, the ratio of US executive pay to worker pay was 20:1 ... they observed that it is currently 400:1 and totally out of control. By comparison in other industrial countries it runs more like 10:1.
Another recent news article said that during the four yr period in the run up to the current credit crunch ... wall street paid out over $160 billion in bonuses (some implication was that it was essentially part of the $400billion to $1trillion in current write-down losses ... claim a profit for a bonus ... which some years later actually turns out to be a loss).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 12, 2008 Subject: Should The CEO Have the Lowest Pay In Senior Management? Blog: Information SecurityOn 6/12/08 9:03 AM, John Taylor wrote:
John Boyd ... in his briefings on the organic design for command and
control ... used to give a different explanation. lots of past posts
and/or URL references from around the web:
https://www.garlic.com/~lynn/subboyd.html
He claimed that on the entry into WW2, US had to deploy a huge number of quickly trained and inexperienced people. In order to leverage the scarce skilled resources ... they created a tightly controlled and extremely rigid command and control infrastructure. Then things roll forward a few decades and these former young officers (getting their indoctrination in how to run a large organization) started to permeate the upper ranks of commercial institutions ... and began to change the whole flavor of how large (commercial) organizations were run (reflecting their training in ww2 as young officers) ... changing the whole culture into assumption that only the top officers know what they were doing ... and essentially everybody else in the organization was totally unskilled.
So which is cause and which is effect? ... the belief that they are the only ones that know what they are doing ... justifies the enormous compensation .... or the enormous compensation justifies treating everybody else like they don't know what they are doing.
one of the other things Boyd did was give advice to up & coming youngsters that they needed to choose a career path ... either "DO something" or "BE somebody"; BE somebody could lead to positions of distinction and power, while choosing to "DO something" could put you in opposition to those in "power" and result in reprimands. This didn't endear him to the Air Force brass ... and recently the SECDEF was advising young officers to be more like Boyd (which is presumed to have really angered the Air Force brass).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Outsourcing dilemma or debacle, you decide... Newsgroups: bit.listserv.ibm-main Date: Thu, 12 Jun 2008 14:34:57 -0400billwilkie@HOTMAIL.COM (Bill Wilkie) writes:
Boyd OODA-loop would say that it got too rigid and structured
... including too many people with vested interests in not changing.
OODA-loop metaphor focuses on agility, adaptability and change
https://www.garlic.com/~lynn/subboyd.html
... I would assert that it isn't "too expensive" per se ... but too rigid and unable to adapt. Vested interests are likely to throw up lots of road blocks to change ... making things more complicated (and also expensive). Frequently KISS is more conducive to being inexpensive, agile, and adaptable (and also viewed as threat to vested interests).
There is some claim that somewhat happened in the wake of the failed
future system project ... old post that includes comments from
fergus/morris book about the wake left after future system project
failed
https://www.garlic.com/~lynn/2001f.html#33
lots of past posts mentioning failed future system project
https://www.garlic.com/~lynn/submain.html#futuresys
and it took the corporation quite some time to work out of it.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 12, 2008 Subject: Security Awareness Blog: Information Securityrecent article
IT Execs: Our Breaches Are None of Your Business
http://www.darkreading.com/document.asp?doc_id=156297
from above:
Eighty-seven percent of IT decision-makers don't believe the general
public should be informed if a data breach occurs, according to the
study. More than half (61 percent) didn't think the police should be
informed, either.
... snip ...
recent posts discussing background behind breach notification
legislation:
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?
https://www.garlic.com/~lynn/2008i.html#42 Security Breaches
also these Q&A
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/237628-24760462
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243464-24494306
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 14, 2008 Subject: Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure? Blog: Government Policytoxic CDOs were used two decades ago in the S&L crisis to obfuscate underlying value.
long-winded, decade old post discussing much of the current situation
... including visibility into CDO-like instruments.
https://www.garlic.com/~lynn/aepay3.htm#riskm
it used to be that loan originators retained the loans they originated and therefore had to pay attention to loan quality. with toxic CDOs they could unload all the loans they originated, and so all financial "feed-back" controls evaporated and the only measure was how fast they could originate loans and unload them. A side-effect was that a whole lot of loans got written w/o regard to whether the people getting the loans were qualified.
Subprime loans were subprime in another sense. They were targeted at first time home owners with no credit history, who were therefore lower quality borrowers. Many were also subprime in the sense that the loans had a very low introductory borrowing rate for the first couple years and then became a standard adjustable rate loan. There are some stats that the majority of such loans went to people with credit history and likely not for owner-occupied housing (i.e. speculators that were looking to flip the property before the rate adjusted).
There were a large number of first time home owners that weren't remotely qualified for the house they moved into. However, there appears to be a much larger number of such subprime loans that went to pure speculation.
the 2nd order effects are that they are talking about something like $1 trillion in toxic CDO write downs. The simplified mathematical formula is that $1 trillion was unrealistically pumped into the loan market ... with a corresponding inflation in housing prices; the implication is that a corresponding deflation adjustment now occurs in housing prices.
Housing prices are sensitive to demand ... not only did that $1 trillion unrealistically drive up prices ... but the speculation also tended to create the impression of much larger demand than actually existed (houses being held by non-owner/occupied speculators looking to keep the house for a year or so and then flip it).
with some number of loans at 100% (with no down payment) ... the deflation of housing prices (to realistic levels) results in houses being worth less than the loan.
a few weeks ago one of the business news channel commentators was getting annoyed by Bernanke getting into a rut with constant refrain that new regulations will fix the problem ... and came out with the statement that American bankers are the most inventive in the world and have managed to totally screw up the system at least once a decade regardless of the regulations in effect.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 14, 2008 02:05 PM Subject: Hypothesis #4 -- The First Requirement of Security is Usability Blog: Financial Cryptographyre:
There is also the issue of who might make money off it. I've commented
that the ssl server certification authority industry has somewhat backed
DNSSEC ... but it represents a significant catch-22 for them
https://www.garlic.com/~lynn/subpubkey.html#catch22
currently ssl domain name server digital certificates represent a binding between a domain name and a public key. the authoritative agency for domain names is the domain name infrastructure. ssl domain name server digital certificates were (at least partially) justified by perceived vulnerabilities in the domain name infrastructure (the same domain name infrastructure that is the authoritative agency for domain names).
the root trust for domain names is the domain name infrastructure ... so part of DNSSEC could be viewed as improving the integrity of the domain name infrastructure as part of eliminating systemic risk for ssl domain name server digital certificates. This can be achieved by having a public key presented as part of registering a domain name ... and then requiring that future communication with the domain name infrastructure be digitally signed ... which can be verified with the previously registered, on-file public key.
This can also be used to reduce the cost of ssl domain name digital certificates. Currently certification authorities require an ssl digital certificate application to include a whole lot of identification information. Then the certification authority has to perform an error-prone, expensive and time-consuming identification matching process against the information on file (for the domain name) with the domain name infrastructure.
With an on-file public key, certification authorities can just require that ssl domain name digital certificate applications be digitally signed ... then the certification authority can replace the time-consuming, expensive, and error-prone identification process with a much more reliable and inexpensive authentication process ... verifying the digital signature with the public key on file with the domain name infrastructure.
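a minimal sketch of the idea (my own data structures and library choice ... python's "cryptography" package ... not anything the domain name infrastructure actually exposes): the public key registered along with the domain is later used to authenticate requests, replacing the identification-matching step:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

onfile_keys = {}    # stands in for the domain name infrastructure's database

def register_domain(domain, public_key):
    onfile_keys[domain] = public_key          # public key supplied at registration

def authenticated(domain, request, signature):
    # True if the request verifies against the key on file for the domain
    key = onfile_keys.get(domain)
    if key is None:
        return False
    try:
        key.verify(signature, request, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# domain owner registers a public key along with the domain name
owner_key = ec.generate_private_key(ec.SECP256R1())
register_domain(b"example.com", owner_key.public_key())

# later, an ssl certificate application is simply digitally signed ...
# the CA authenticates it against the on-file key instead of matching identity
application = b"issue ssl cert for example.com"
sig = owner_key.sign(application, ec.ECDSA(hashes.SHA256()))
print(authenticated(b"example.com", application, sig))    # True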
the catch-22 for the ssl domain name certification authority industry
1) improvements in the integrity of the domain name infrastructure mitigate some of the original justification for ssl domain name digital certificates
2) if the general public can also start doing trusted real-time retrieval of on-file public keys ... it further eliminates the need for ssl domain name digital certificates, as well as being a general demonstration that digital certificates aren't needed for trusted public key operations.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS X Finder windows vs terminal window weirdness Newsgroups: alt.folklore.computers Date: Sat, 14 Jun 2008 20:01:13 -0400Peter Flass <Peter_Flass@Yahoo.com> writes:
there were a number of issues here ... the base cms filesystem (when possible) would do 64k-byte reads from the filesystem (records of the executable had to be allocated sequentially & contiguously). i did some tricks in the underlying paged mapped support to dynamically adapt how the operation was performed ... if it was a really large executable, a large amount of contention for real storage, and very little real storage ... then it would allow things to progress in demand page mode.
If the resources were available, "asynchronous reads" would be queued for the whole executable ... the underlying paging mechanism would reorganize them for optimal physical transfer ... and execution could start as soon as the page for the execution start was available (even if the rest weren't all in memory). There are some processor cache operations that can work like this (as soon as the requested word is available even if the full cache line isn't). The issue is that for large executables ... reverting to 4k demand page operations involves a huge number of latencies.
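a hedged sketch of that adaptive decision (my own thresholds and names; the real cp/cms code obviously looked nothing like this):

def plan_program_load(exec_pages, free_frames, contention):
    # exec_pages   -- size of the executable, in 4K pages
    # free_frames  -- real-storage page frames currently available
    # contention   -- rough measure of competing demand for real storage (0..1)
    if exec_pages > free_frames or contention > 0.8:
        # large executable, little storage, heavy contention: fall back to
        # straight 4K demand paging and accept the per-page latencies
        return {"mode": "demand-page", "prefetch": 0}
    # otherwise queue asynchronous reads for the whole image, letting the
    # paging subsystem reorder them for optimal physical transfer; execution
    # begins as soon as the entry-point page arrives
    return {"mode": "async-prefetch", "prefetch": exec_pages}

print(plan_program_load(exec_pages=200, free_frames=1000, contention=0.2))
print(plan_program_load(exec_pages=2000, free_frames=300, contention=0.9))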
however, a lot of cms compilers and applications were borrowed from
os360 ... which had the behavior that lots of program image locations
had to be fetched and "swizzled" before execution could begin. lots
of past posts discussing difficulty of patching os360 implementation
paradigm for operation in a high performance page-mapped environment.
https://www.garlic.com/~lynn/submain.html#adcon
for other topic drift ... old post about being contacted by people in
the os2 group about adapting stuff that i had done in the 60s and early
70s for os2 implementation:
https://www.garlic.com/~lynn/2007i.html#60 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007l.html#61 John W. Backus, 82, Fortran developer, dies
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Sat, 14 Jun 2008 21:34:17 -0400"Vadim Rapp" <nospam@sbcglobal.net> writes:
public(/private) key cryptography is a business process where one key (of an asymmetric key pair) is kept confidential and never divulged (the private key) and the other key (public) is freely distributed.
a digital signature is a business process that provides authentication and integrity. the hash of a message is encoded with a private key. subsequently the hash of the message is recalculated and compared with the "digital signature" hash that has been decoded with the corresponding public key. if they are equal, then the message is presumed to not have been modified and was "signed" by the entity in possession of the specific "private key". If the hashes are not equal, then the message has been altered (since "signing") and/or originated from a different entity.
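a minimal sketch of that sign/verify business process, assuming python's "cryptography" package (the library and padding/hash parameter choices are mine, purely for illustration):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# the private key is kept confidential and never divulged; only the public
# key is distributed
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer $100 to account 12345"

# "digital signature": hash of the message encoded with the private key
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verification: recompute the hash and compare against the decoded signature;
# raises InvalidSignature if the message was altered or signed with another key
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified: message intact and signed by holder of the private key")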
over the years there has been some amount of semantic confusion involving the terms "digital signature" and "human signature" ... possibly because they both contain the word "signature". A "human signature" implies that the person has read, understood, and agrees, approves, and/or authorizes what has been signed. A "digital signature" frequently may be used where a person never even has actually examined the bits that are digitally signed.
a digital certificate is a business process that is the electronic analogy to the letters of introduction/credit for first time communication between two strangers (from sailing ship days and earlier) ... where the strangers have no direct knowledge of each other and/or don't have recourse to information sources about the other entity.
there was work on generalized x.509 identity digital certificates nearly two decades ago. the issue, by the middle 90s, was that most organizations realized that such identity digital certificates represented significant privacy and liability issues. As a result, there was significant retrenching from the paradigm.
In part, the original scenario was electronic mail from the early 80s, where somebody dialed up their electronic post office, exchanged email and then hung up. There could be significant problem authenticating first time email from total stranger (in this mostly "offline" environment).
Digital certificates had started out with a fairly narrowly defined market ... first time communication between strangers w/o direct knowledge of each other (and/or recourse to information about the other party). Realizing that generalized identity certificates represented significant privacy and liability issues, resulted in retrenching and further narrowing of the target market. The increasing pervasiveness of the internet and online information sources further narrowed their target market and usefulness (since there became lots of alternatives for information about total strangers).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 15, 2008 04:16 PM Subject: Selling Security using Prospect Theory. Or not. Blog: Financial Cryptographyre:
how many times has the refrain been repeated about the deficiency of "after market" solutions ... that security has to be built in ... not affixed on afterwards (aka automobile safety analogy ... things like seat belts, safety glass, air bags, bumpers, crash impact zones, etc).
however, based on the automobile analogy, there may be some evidence that it only happens with gov. mandates.
the safety/security engineers don't disappear with built in security ... but they tend to disappear from public limelight.
misc. old posts that include raising the aftermarket seat belt analogy
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm16.htm#15 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#40 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#56 Question on the state of the security industry
https://www.garlic.com/~lynn/aadsm19.htm#10 Security as a "Consumer Choice" model or as a sales (SANS) model?
https://www.garlic.com/~lynn/aadsm21.htm#16 PKI too confusing to prevent phishing, part 28
https://www.garlic.com/~lynn/aadsm22.htm#28 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm26.htm#64 Dr Geer goes to Washington
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: parallel computing book Newsgroups: comp.arch Date: Mon, 16 Jun 2008 11:42:54 -0400Stephen Fuld <S.Fuld@PleaseRemove.att.net> writes:
older reference to Greg
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism
referencing a little difference of opinion that Greg & I had regarding
work on clusters ... from this exchange
https://www.garlic.com/~lynn/2000c.html#21 Cache coherence [was Re: TF-1]
regarding medusa effort ... old email
https://www.garlic.com/~lynn/lhwemail.html#medusa
and this old post
https://www.garlic.com/~lynn/95.html#13
where the effort was transferred and we were told we couldn't work on anything with more than four processors.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Tue, 17 Jun 2008 09:08:35 -0400Michael Ströder <michael@stroeder.com> writes:
basically a public key operation is something you have authentication ... i.e. a business process that keeps the corresponding private key confidential and never divulged to anybody. verifying a digital signature (created by a specific private key) with the corresponding public key ... demonstrates that the entity has possession of that "private key" (kept confidential and never divulged to anybody).
as mentioned, digital certificate is the electronic version of the ancient letters of credit/introduction ... indicating something about the entity associated with something you have authentication for first time communication between two strangers (who have no other access to information about each other, either locally and/or in an online environment).
we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server and they had invented this thing called SSL that they wanted to use as part of the process. as a result we had to do a detailed business walkthru of the SSL process, as well as of these new operations calling themselves certification authorities ... and these things they were calling digital certificates.
we had signoff/approval authority on the operation between the server
and this new thing called payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
and were able to mandate some compensating procedures. We only had advisory capacity between the servers and clients ... and almost immediately most deployments violated basic SSL assumptions necessary for security (which continues up to the current day).
In those early days, we were getting comments from certain factions that digital certificates were necessary to bring payment transactions into the modern age. We observed that the use of digital certificates (with their offline design point) actually set online payment transactions back decades (rather than making them more modern). It was somewhat after a whole series of those interchanges that work started on (rube goldberg) OCSP ... which has the facade of providing some of the benefits of online, timely operation while still preserving the archaic offline digital certificate paradigm. The problem with OCSP is that it doesn't go the whole way and just make things a real online, timely operation (and eliminate the facade of needing digital certificates for operation in an offline environment). In an online payment transaction scenario, not only is it possible to do a real-time lookup of the corresponding public key for real-time (something you have) authentication, but also to do real-time authorization ... looking at things like current account balance and/or doing other analysis based on current account characteristics and/or account transaction activity/patterns.
There were other incidental problems trying to apply digital
certificates (specifically) to payment transactions (other than
reverting decades of real-time, online operation to an archaic
offline paradigm). After we worked on what is commonly referred to as
electronic commerce today (including the SSL domain name digital
certificate part) ... there was some number of efforts to apply digital
certificates to payment transactions ... at the same time we had been
called in to work in the x9a10 financial standard working group (that
had been given the requirement to preserve the integrity of the financial
infrastructure for all retail payments). we came up with x9.59 financial
standard which could use digital signature authentication w/o the need
for digital certificates (i.e. use digital signatures in a real online
mode of operation w/o trying to maintain any fiction of digital
certificates and offline operation).
https://www.garlic.com/~lynn/x959.html#x959
we would periodically ridicule the digital certificates based efforts
(besides noting that it was an attempt to revert the decades of online
operation to an offline paradigm). some of that presumably sparked the
OCSP effort. However, the other thing we noted was that the addition
of digital certificates to a payment transaction increased the typical
payload size by a factor of 100 along with an increase in
processing by a factor of 100. This was enormous bloat (both
payload and processing) for no useful purpose (digital certificates
were redundant and superfluous compared to having public key on file in the
account record ... which turns out was necessary for other purposes
anyway). misc. past references
https://www.garlic.com/~lynn/subpubkey.html#bloat
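As a rough feel for where a factor-of-100 payload bloat comes from, a back-of-envelope sketch (the byte counts below are illustrative assumptions, not figures from the post):

base_payment_msg = 80        # typical payment authorization message, on the order of 60-100 bytes (assumed)
typical_cert     = 8 * 1024  # typical x.509 digital certificate, on the order of 4k-12k bytes (assumed)

bloat = (base_payment_msg + typical_cert) / base_payment_msg
print(f"appending a certificate bloats the payload roughly {bloat:.0f}x")   # on the order of 100x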
we also noted that the primary purpose of SSL in the world today is in
the electronic commerce application, where it is used to hide the account number
and transaction details (as a countermeasure to the account fraud flavor of
identity theft). we pointed out that the work on x9.59 had also slightly
tweaked the payment transaction paradigm and eliminated the need to
"hide" the transaction details. From the security acronym PAIN
P ... privacy (sometimes CAIN, confidential)
A ... authentication
I ... integrity
N ... non-repudiation
... in effect, x9.59 substitutes strong authentication and integrity for
privacy as a countermeasure to account fraud (a flavor of identity theft).
We noted that not only did the x9.59 standard eliminate the major use of
SSL in the world today (hiding the account number and transaction
details) ... but no longer needing to hide that information also
eliminates the threats and vulnerabilities associated with the majority of the data
breaches that have been in the news (it doesn't eliminate the breaches,
just eliminates the ability of the attackers to use the information for
fraudulent purposes).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Stephen Morse: Father of the 8086 Processor Newsgroups: alt.folklore.computers Date: Tue, 17 Jun 2008 09:22:09 -0400Stephen Morse: Father of the 8086 Processor
from above:
In honor of the 30th anniversary of Intel's 8086 chip, the
microprocessor that set the standard that all PCs and new Macs use
today, I interviewed Stephen Morse, the electrical engineer who was most
responsible for the chip.
... snip ...
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 17, 2008 Subject: Which of the latest browsers do you prefer and why? Blog: Web DevelopmentI've been using mozilla tab browsing for 5-6 yrs as a means of masking/compensating for web latency.
i started out with a tab folder that i could click on and it would fetch 80-100 news oriented web pages (while i got a cup of coffee). I could then quickly cycle thru the (tabbed) web pages ... clicking on interesting articles (which would asynchronously load in the background into new tabs). By the time I had cycled thru all the initial web pages, the specific news articles would have all loaded and be immediately available.
Early on, I would complain about apparent storage cancers and performance problems when there were 500-600 open tabs (machine still had real storage to avoid any paging ... unless this was repeated several times w/o cycling the browser).
About the time firefox moved to sqlite ... i switched to a process that used wget to fetch the initial set of (80-100) news oriented pages ... did a diff against the previous fetch, and then used sqlite to extract firefox's previously seen URLs. This was used to produce a list of "new" URLs (from the web sites) that also had not otherwise been seen previously. I then used the command line interface to signal the running firefox to load the list of "new" URLs into background tabs. The most recent firefox builds have improved significantly in both storage utilization and performance (handling opening several hundred URLs into background tabs).
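Roughly the shape of that pipeline, as a minimal python sketch (the start-page list, the firefox profile path and the link-extraction regex are hypothetical placeholders, and the diff against the previous fetch is omitted for brevity):

import pathlib
import re
import sqlite3
import subprocess

# the 80-100 news-oriented start pages and the profile path are placeholders
pages  = ["https://example.com/news"]
places = pathlib.Path.home() / ".mozilla/firefox/xxxx.default/places.sqlite"

def fetch_links(url):
    # wget writes the page to stdout; pull out the outbound links
    html = subprocess.run(["wget", "-q", "-O", "-", url],
                          capture_output=True, text=True).stdout
    return set(re.findall(r'href="(https?://[^"]+)"', html))

# URLs firefox has already seen, straight from its places.sqlite history
# (a running firefox may hold locks on this file; copying it first is safer)
con  = sqlite3.connect(str(places))
seen = {row[0] for row in con.execute("SELECT url FROM moz_places")}
con.close()

# "new" URLs: on the news pages now, never seen by the browser before
new_urls = set()
for page in pages:
    new_urls |= fetch_links(page) - seen

# hand the new URLs to the already-running firefox as background tabs
for url in sorted(new_urls):
    subprocess.run(["firefox", "--new-tab", url])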
during the evolution of firefox 3 and its sqlite use ... this process has needed some adaptation to changes involving serialization and locking on the sqlite file.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Own a piece of the crypto wars Date: Tue, 17 Jun 2008 13:53:59 -0400 To: R.A. Hettinga <rah@xxxxxxxx> CC: cypherpunks@xxxxxxx, cryptography@xxxxxx gold-silver-crypto@xxxxxxxx, dgcchat@xxxxx, Sameer Parekh
archeological email about a proposal for doing pgp-like public key (from 1981):
the internal network was larger than the arpanet/internet from just about the beginning until sometime in the summer of '85. corporate guidelines had become that all links/transmissions leaving corporate facilities were required to be encrypted. in the '80s this meant lots of link encryptors (in the mid-80s, there was a claim that the internal network had over half of all the link encryptors in the world).
a major crypto problem was that just about every link crossing a national boundary created problems with the governments on both sides. links within national boundaries could usually get away with the argument that it was purely internal communication within the same corporate entity. there was then all sorts of resistance encountered attempting to apply that argument to links that crossed national boundaries (from just about every national entity).
For other archeological lore ... old posting with new networking
activity for 1983
https://www.garlic.com/~lynn/2006k.html#8
above posting includes a listing of locations (around the world) that had one or more new network links (on the internal network) added sometime during 1983 (a large percentage involved connections requiring link encryptors).
more recent post
https://www.garlic.com/~lynn/2008h.html#87
mentioning coming to the realization (in the 80s) that there were three kinds of crypto.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Historical copy of PGP 5.0i for sale -- reminder of the war we lost Newsgroups: alt.folklore.computers Date: Tue, 17 Jun 2008 18:44:13 -0400Historical copy of PGP 5.0i for sale -- reminder of the war we lost
there is a number of references to the subject ... i had posted this
to similar thread that is running in the crypto mailing list
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
regarding crypto on the internal network more than a decade earlier (which is also reproduced in the financial crypto blog).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: squirrels Newsgroups: alt.folklore.computers Date: Wed, 18 Jun 2008 08:47:29 -0400jmfbahciv <jmfbahciv@aol> writes:
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 18, 2008 12:49 PM Subject: Technologists on signatures: looking in the wrong place Blog: Financial Cryptographyre:
couple recent posts in microsoft crypto n.g. thread on "Certificate
Purpose" that got into description of digital signature being
something you have authentication
https://www.garlic.com/~lynn/2008i.html#80
https://www.garlic.com/~lynn/2008i.html#83
and there periodically being semantic confusion with "human signature"
... possibly because both terms contain the word "signature". misc. past
posts about being called in to help wordsmith the cal. state
electronic signature legislation (and later the federal electronic
signature legislation)
https://www.garlic.com/~lynn/subpubkey.html#signature
and the oft repeated statement that "human signatures" carry the implication of having read, understood, agreed, approved, and/or authorized (which isn't part of a something you have authentication digital signature).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Wed, 18 Jun 2008 15:52:59 -0400Paul Adare <pkadare@gmail.com> writes:
... the chip is a more secure storage method for the private key. for digital signatures to represent something you have authentication, an established business process has to provide that the private key has never been divulged, is kept confidential, and that any specific private key is only in the possession of a single individual (the chip storage supposedly provides for high integrity and additional assurance that only a single entity has access to & use of the private key).
The public/private key process provides for the public key to be published and widely distributed. Digital certificates are a specific kind of business process for the distribution of public keys.
From a something you have authentication business process requirement for private key ... the chip can provide for a confidential storage method for the private key. The chip may also be used as a convenient storage method for the corresponding public key and any associated digital certificate (but there isn't a security requirement to keep the public key and associated digital certificates confidential ... just the reverse ... the objective is to make copies of them generally available).
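A software-only python sketch of that split (using the pyca/cryptography package; a real chip would keep the private key inside the hardware rather than in an encrypted PEM, and the passphrase here is just a stand-in):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# key pair generation (ideally done inside the chip so the private key never leaves it)
private_key = ec.generate_private_key(ec.SECP256R1())

# the private key must stay confidential; if it has to exist in software at all,
# it is stored encrypted and never divulged
private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"chip-pin"),
)

# the public key has the opposite objective: copies are made generally available
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())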
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Wed, 18 Jun 2008 16:13:22 -0400Anne & Lynn Wheeler <lynn@garlic.com> writes:
i.e. re:
https://www.garlic.com/~lynn/2008i.html#90 Certificate Purpose
oh and for a little topic drift ... some recent posts/comments about PGP
which makes use of public/private key infrastructure for secure email
but w/o digital certificates
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of crypto wars
https://www.garlic.com/~lynn/2008i.html#87 Historical copy of PGP 5.0i for sale -- reminder of the war we lost
it also mentions/references this old email from '81
https://www.garlic.com/~lynn/2006w.html#email810515
in this post
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
proposing a PGP-like certificate-less public/private key operation for the internal network.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Wed, 18 Jun 2008 17:10:39 -0400"David H. Lipman" <DLipman~nospam~@Verizon.Net> writes:
since then several organizations have effectively moved to the position that various kinds of additional business processes &/or services need to be used to provide for non-repudiation about *something*.
from my merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote
... a (90s) "GSA" definition for non-repudiation:
Assurance that the sender is provided with proof of delivery and that
the recipient is provided with proof of the sender's identity so that
neither can later deny having processed the data. Technical
non-repudiation refers to the assurance a relying party has that if a
public key is used to validate a digital signature, that signature had
to have been made by the corresponding private signature key. Legal
non-repudiation refers to how well possession or control of the private
signature key can be established.
... snip ...
more recent definition from NIST 800-60:
Assurance that the sender of information is provided with proof of
delivery and the recipient is provided with proof of the sender's
identity, so neither can later deny having processed the information.
... snip ...
or FFIEC:
Ensuring that a transferred message has been sent and received by the
parties claiming to have sent and received the message. Non-repudiation
is a way to guarantee that the sender of a message cannot later deny
having sent the message and that the recipient cannot deny having
received the message.
... snip ...
The current scenarios regarding non-repudiation involve additional business processes and/or services (other than entity something you have digital signatures).
For additional topic drift, one of the non-repudiation vulnerabilities for digital signatures can be a dual-use problem. Digital signatures can be used in a purely (possibly challenge/response) something you have authentication (say in place of a password). The server sends random data (as a countermeasure to replay attacks), which the client is expected to digitally sign (with the appropriate private key). The server then verifies the returned digital signature with the on-file public key (for that account). These scenarios never have the client actually examining the data being digitally signed. If the same public/private key pair is also ever used in a scenario where the entity is assumed to have actually read (understood, agrees, approves, and/or authorizes) what is being digitally signed ... then an attack is to include something other than random data (say some sort of payment transaction) in a challenge/response, something you have authentication exchange.
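A minimal python sketch of that challenge/response flow (pyca/cryptography package; all names are hypothetical). Note the client signs whatever "challenge" arrives without examining it, which is exactly where the dual-use exposure comes from:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# registration: the public key goes on file in the account record,
# the private key stays with the client (ideally inside a hardware token)
client_private = ec.generate_private_key(ec.SECP256R1())
account_record = {"public_key": client_private.public_key()}

# server side: random challenge as a replay countermeasure
challenge = os.urandom(32)

# client side: digitally sign the challenge -- without ever looking at it
# (if the same key pair were also used to sign documents/transactions, a
#  malicious "challenge" could actually be a payment transaction)
signature = client_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# server side: verify the returned signature with the on-file public key
try:
    account_record["public_key"].verify(signature, challenge,
                                        ec.ECDSA(hashes.SHA256()))
    print("something you have authentication succeeded")
except InvalidSignature:
    print("authentication failed")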
The countermeasure is to guarantee that a private key can only be used for digitally signing one specific kind of data, and that it is physically impossible for that private key to be used for making any other kind of digital signature (for instance, the environment holding the private key knows that the hash being encoded to form a digital signature is guaranteed to be of text that has been read & understood by the key owner ... and w/o that knowledge, it will refuse to perform the encoding operation).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Certificate Purpose Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin Date: Thu, 19 Jun 2008 15:04:38 -0400"Vadim Rapp" <vr@nospam.myrealbox.com> writes:
recipient is a relying party ... typically in the trusted 3rd party certification authority paradigm ... why do you think the word trusted appears in the press so much?
trusted 3rd party certification authorities have been typically disclaiming responsibility/liability for ages.
so there are actually a number of trust problems.
for a technical trust deficiency, most certification authorities aren't the authoritative agency for the information they are certifying (which is embodied in the digital certificate they issue).
in the case of email, the authoritative agency for an email address is typically the associated ISP. so if that ISP doesn't provide any security for passwords ... then some attacker could obtain access to the email account. they could then apply for a different digital certificate (with a different public/private key) for the same email address. Now there is a situation where there may be two (or more) different trusted, valid, accepted digital certificates for the same email address.
a recipient's countermeasure for this sort of threat is to maintain
local repository of the correct digital certificate. however, that
actually becomes the PGP model ... which only requires the recipient to
maintain local repository of the correct public key ... where digital
certificates are redundant and superfluous.
https://www.garlic.com/~lynn/subpubkey.html#certless
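A minimal python sketch of that recipient-side countermeasure (the keyring is just an in-memory dict and every name is hypothetical) ... once the correct public key is pinned locally, a second certificate issued to an attacker for the same email address buys the attacker nothing:

# local repository of known-correct sender public keys (the PGP model):
# pin the key once (verified out of band), then check it on every later message
local_keyring = {}   # email address -> pinned public key bytes (e.g. PEM)

def pin_sender(email, public_pem):
    local_keyring[email] = public_pem

def check_sender(email, public_pem):
    pinned = local_keyring.get(email)
    if pinned is None:
        return "unknown sender -- verify the key out of band first"
    if pinned != public_pem:
        return "MISMATCH -- possibly a substituted key / rogue certificate"
    return "ok -- matches the pinned key"

pin_sender("alice@example.com", b"alice public key pem")
print(check_sender("alice@example.com", b"alice public key pem"))  # ok
print(check_sender("alice@example.com", b"some other key"))        # MISMATCH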
for a business trust deficiency ... parties have responsibility/liability obligations based on explicit or implicit contract. in the trusted 3rd party certification authority business model the contract is between the certification authority and the entity that the digital certificate is issued to. there typically is no implicit, explicit, and/or implied contract between trusted 3rd party certification authorities and the relying parties that rely on the validity of the issued digital certificates ... and therefore no reason for relying parties to trust the digital certificates.
basically the trusted 3rd party certification authority business model doesn't correspond to long established business practices. this is actually highlighted in the federal PKI program ... which has the GSA ... acting as an agent for all federal relying party entities ... signing explicit contracts with the authorized certification authorities ... creating explicit contractual obligation between the relying parties and the trusted 3rd party certification authorities ... providing basis on which trust/reliance can be based.
another approach is the relying-party-only certification authority
(i.e. the relying party actually issuing the digital
certificate).
https://www.garlic.com/~lynn/subpubkey.html#rpo
the issue here is that the certification authority has, as part of the business process, something frequently referred to as registration ... where the public key is registered (prior to issuing a digital certificate). The original design point for digital certificates is first time communication between two strangers. However, in all the relying-party-only scenarios it is normally trivial to show that the digital certificates are redundant and superfluous ... since the public key is typically registered in the same repository where other information about the subject entity is being kept ... and which is normally accessed in any dealings that the relying party has with that entity.
as mentioned previously, the early 90s saw work on generalized x.509 identity digital certificates ... but by the mid-90s, most institutions realized that these "identity" digital certificates (frequently becoming overloaded with personal information) represented a significant privacy and liability issue. The retrenchment was to relying-party-only digital certificates which would only contain some sort of record locator ... where all the actual information resided. Again it was trivial to show that digital certificates were redundant and superfluous since this record would also contain the associated public key.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Lynn Wheeler <lynn@xxxxxxxx> Date: June 20, 2008 Subject: Lynn - You keep using the term "we" - who is "we"? Blog: Information Security - UKmy wife and I worked together on many of these activities.
for instance, we had done a high-speed data transport project (HSDT)
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and were working with various organizations going forward for
NSFNET. TCP/IP is somewhat the technical basis for the modern internet,
the NSFNET backbone was the operational basis for the modern internet and
then CIX was the business basis for the modern internet. However,
internal politics got in the way of our bidding on the NSFNET
backbone. The director of NSF tried to help by writing a letter to the
company (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP
and director of Research, copying the IBM CEO), including stating that an
audit found what we already had running internally was at least five
years ahead of all NSFNET backbone bid submissions. But that just made
the internal politics worse. Some old email regarding NSFNET related activities
https://www.garlic.com/~lynn/lhwemail.html#nsfnet
later we ran ha/cmp project that resulted in developing and shipping
the HA/CMP product ... misc. past post
https://www.garlic.com/~lynn/subtopic.html#hacmp
for some tie-in (between high-availability, cluster scale-up,
supercomputers, relational databases and SSL) ... two of the people in
this ha/cmp scale-up meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13
show up later at a small client/server startup responsible for something called a commerce server. the startup had invented something called SSL and they wanted to apply it as part of implementing payment transactions on their server. The result is now frequently referred to as electronic commerce.
old email on the cluster scale-up aspect
https://www.garlic.com/~lynn/lhwemail.html#medusa
We both attended the Jim Gray tribute a couple weeks ago at
Berkeley. Random other database tidbits ... including working on the
original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr
In this patent portfolio involving security, authentication, access
control, hardware tokens, etc ... we are the co-inventors
https://www.garlic.com/~lynn/aadssummary.htm
and in one of her prior lives ... long ago and far away ... she had
been con'ed into going to POK to serve as the (corporate)
loosely-coupled (aka mainframe for cluster) architect ... where she
was responsible for Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
... which, except for IMS hot-standby, saw very little takeup until SYSPLEX (one of the reasons that she didn't stay very long in the position).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Accidentally Deleted or Overwrote Files? Newsgroups: alt.folklore.computers Date: Fri, 20 Jun 2008 14:47:28 -0400stremler writes:
some recent posts:
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions
https://www.garlic.com/~lynn/2008i.html#62 Ransomware
https://www.garlic.com/~lynn/2008i.html#63 DB2 25 anniversary
https://www.garlic.com/~lynn/2008i.html#70 Next Generation Security
https://www.garlic.com/~lynn/2008i.html#94 Lynn - You keep using the term "we" - who is "we"?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Blast from the Past Newsgroups: alt.folklore.computers Date: Sun, 22 Jun 2008 09:59:53 -0400Quadibloc <jsavard@ecn.ab.ca> writes:
old post of science center's 360/67 in the 2nd flr machine room at 545
tech sq (open system, dumping directly into cambridge sewer, 40yrs ago)
and looking at replacing with a closed system ... but a big question was
the weight loading of the water tower on the bldg. roof:
https://www.garlic.com/~lynn/2000b.html#86 write rings
other posts mentioning science center at 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech
things evolved into (multiple levels of) closed systems with heat exchange interfaces (with a requirement for very pure liquid in the closed system actually circulating thru the machine). there is old folklore about one such early customer installation that had all sorts of sensors that would trip thermal shutdown (avoiding overheating and damage to machine components). the particular problem was that there wasn't a flow sensor on the external system next to the machine (there were flow sensors on the internal system) ... by the time the internal thermal sensors started to notice a temperature rise (because flow had stopped on the external side) it was too late ... there was too much heat on the internal side which couldn't be dumped.
misc. posts mentioning (closed system) heat exchange:
https://www.garlic.com/~lynn/2000b.html#36 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2001k.html#4 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2004p.html#35 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#41 IBM 3614 and 3624 ATM's
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: We're losing the battle Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 22 Jun 2008 10:36:09 -0400Robert.Richards@OPM.GOV (Richards, Robert B.) writes:
old post about deploying ha/cmp scale-up before the project
got redirected and we were told to not work on anything
with more than four processors
https://www.garlic.com/~lynn/95.html#13
misc. old email regarding ha/cmp scale-up activity
https://www.garlic.com/~lynn/lhwemail.html#medusa
i've frequently commented that (much) earlier, my wife had been
con'ed into going to POK to be in charge of loosely-coupled architecture
where she created Peer-Coupled Shared Data ... misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata
but, except for IMS hot-standby ... it saw very little take-up until much later with sysplex (and parallel sysplex) activity ... which contributed to her not staying very long in the position.
another issue in that period was that she had constant battles with the communication division over protocols used for the infrastructure. in the early sna days ... she had co-authored a peer-to-peer networking architecture (AWP39) ... so some in the communication division may have viewed her efforts as somewhat competitive. while she was in POK, they had come to a (temporary) truce ... where the communication division's protocols had to be used for anything that crossed the boundary of the glasshouse ... but she could specify the protocols used for peer-coupled operation within the walls of the glasshouse.
part of doing ha/cmp on a non-mainframe platform was avoiding being limited
by the communication division. for some topic drift, other past posts
mentioning conflict with communication division when we came up with
3-tier architecture and were out pitching it to customer executives
https://www.garlic.com/~lynn/subnetwork.html#3tier
recent ha/cmp related post (from thread mentioning tribute to Jim Gray)
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#51 Microsoft versus Digital Equipment Corporation
the first talk at the tribute was by Bruce Lindsay mentioning that Jim's formalizing of transaction semantics was the great enabler for online transactions (providing the necessary trust in computer operation to move off the manual/paper operation).
now related to the meeting mentioned in this referenced post
https://www.garlic.com/~lynn/95.html#13
two of the people mentioned in the meeting later show up in a small
client/server startup responsible for something called a commerce
server. we were called in to consult because they wanted to do payment
transactions on the server ... and they had this technology that the
startup had invented called SSL which they wanted to use. As part of
doing payment transactions on the server ... there was the creation of
something called a payment gateway that servers would interact with.
lots of past posts mentioning this thing called payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
btw, we used ha/cmp for the payment gateway implementation (with some number of enhancements and compensating procedures). this is now frequently referred to as electronic commerce.
recent post related some other aspects of the period (in an
information security blog)
https://www.garlic.com/~lynn/2008i.html#94 Lynn - You keep using the term "we" - who is "we"?
one of the other things mentioned at the tribute, was Jim's work on
analysing where the majority of outages are happening (frequently cited
study that outages are rarely hardware anymore). when we were out
marketing ha/cmp product, we had coined the terms disaster
survivability and geographic survivability ... to differentiate from
simple disaster/recovery. we were also asked to write a section for the
corporate continuous availability strategy document. however, the
section was removed because both rochester and POK complained that they
wouldn't be able to match (what we were doing) for some number of years
https://www.garlic.com/~lynn/submain.html#available
for other drift, recent post discussing the evolution from medusa
to blades and the really major green enabler was the marrying of
virtualization and blades (as part of server consolidation)
https://www.garlic.com/~lynn/2008h.html#45 How can companies decrease power consumption of their IT infrastructure?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: dollar coins Newsgroups: alt.folklore.computers Date: Sun, 22 Jun 2008 12:47:04 -0400Larry Elmore <ljelmore@verizon.spammenot.net> writes:
recent posts mentioning (former) comptroller general on responsible
budgets (making comment that nobody in congress for the past 50 yrs
has been capable of simple middle school arithmetic).
https://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008d.html#40 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008f.html#86 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada
part of it is that the medicare drug legislation (by itself) creates tens of trillions in unfunded liability and that various "social" program spending is starting to dwarf all other gov. budget items (combined).
one of the (oft repeated) references (from the gov. gao site) shows the gov. budget in '66 as 43% defense & 15% social security; in '88, 28% defense and 20% social security; and in 2006, 20% defense and 21% social security (and 19% medicare & medicaid). in '66, the budget was 7% debt interest, 67% discretionary spending and 26% mandatory spending. in '86, it was 14% debt interest, 44% discretionary and 42% mandatory. in '06, it was 9% debt interest, 38% discretionary and 53% mandatory. And by 2040, federal debt interest, federal social security and federal medicare/medicaid will be nearly 30% of GDP.
another view of this i've raised is with respect to the baby boomer
retirement ... the significant baby boomer population bubble increases
the number of retirees by something like a factor of four ... with
the number of workers in the following generation only a little over
half the number of baby boomer workers. The net is that the ratio of
retirees to workers increases by a factor of eight ... aka, each
worker will have to pay eight times as much to provide the same level of per
retiree benefits (there was a recent program that mentioned some court
ruled that the IRS isn't allowed to have a tax rate higher than 100%)
https://www.garlic.com/~lynn/2008f.html#99 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada
other recent comments about baby boomer retirement issues
https://www.garlic.com/~lynn/2008b.html#3 on-demand computing
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#69 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008h.html#11 The Return of Ada
https://www.garlic.com/~lynn/2008h.html#57 our Barb: WWII
https://www.garlic.com/~lynn/2008i.html#56 The Price Of Oil --- going beyong US$130 a barrel
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: We're losing the battle Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 22 Jun 2008 13:02:38 -0400R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
working on ha/cmp we looked at a customer that required five-nines availability ... five minutes of outage (planned & unplanned) per year.
on the other hand ... one of the large financial transaction networks has claimed 100% availability over extended number of years ... using triple redundant IMS hot-standby and multiple geographic locations.
slight drift ... recent Information Security blog post
https://www.garlic.com/~lynn/2008i.html#17 Does anyone have any IT data center disaster stories?
i made a passing reference in a previous post with regard to contention with
the communication division. the tcp/ip mainframe product had significant
performance issues ... consuming nearly a full 3090 processor getting
44kbytes/sec thruput. I enhanced the product with RFC1044 support and in
some tuning tests at Cray research got 1mbyte/sec (hardware limitation)
sustained between a Cray and a 4341-clone (using only a modest amount of
the 4341) ... aka nearly three orders of magnitude increase in the ratio
of bytes transferred per instruction executed. misc. past posts mentioning
rfc1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
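A back-of-envelope python sketch of that bytes-per-instruction comparison (the MIPS ratings and cpu fractions below are illustrative assumptions, not figures from the post):

base    = {"bytes_per_sec": 44_000,    "cpu_fraction": 1.00, "mips": 15.0}  # base product: ~full 3090 processor (assumed ~15 MIPS)
rfc1044 = {"bytes_per_sec": 1_000_000, "cpu_fraction": 0.25, "mips": 1.2}   # rfc1044 path: modest slice of a 4341-clone (assumed ~1.2 MIPS)

def bytes_per_instruction(cfg):
    # instructions/sec actually consumed = cpu_fraction * mips * 1e6
    return cfg["bytes_per_sec"] / (cfg["cpu_fraction"] * cfg["mips"] * 1e6)

ratio = bytes_per_instruction(rfc1044) / bytes_per_instruction(base)
print(f"roughly {ratio:,.0f}x more bytes transferred per instruction executed")

With these assumed numbers the ratio comes out around a thousand, i.e. consistent with the "nearly three orders of magnitude" characterization.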
another area of conflict ... as part of the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
the friday before we were to leave on a trip to the other side of the pacific to discuss some custom built hardware for hsdt ... somebody (from the communication division) announced a new online conference in the area of high-speed communication ... and specified the following definitions:
low-speed        <9.6kbits
medium-speed     19.2kbits
high-speed       56kbits
very high-speed  1.5mbits

the following monday, on the wall of a conference room on the other side of the pacific, were these definitions:

low-speed        <20mbits
medium-speed     100mbits
high-speed       200-300mbits
very high-speed  >600mbits
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS X Finder windows vs terminal window weirdness Newsgroups: alt.folklore.computers Date: Sun, 22 Jun 2008 15:17:05 -0400Roland Hutchinson <my.spamtrap@verizon.net> writes:
computers running multiple things concurrently have traditionally been referred to as multitasking, multiprogramming, concurrent programming, parallel computing, time-slicing, etc. traditional time-sharing systems have been implemented using the technologies for running multiple things concurrently.
for instance, online transaction systems have also tended to run multiple things concurrently ... using multitasking, multiprogramming, concurrent programming, parallel computing, time-slicing, etc technologies ... but usually are differentiated from traditional time-sharing systems.
in that sense, modern webservers tend to have more in common with online transaction systems than traditional time-sharing systems ... although that doesn't preclude deploying a webserver on a traditional time-sharing system (or for that matter online systems).
cp67 and vm370 tended to be deployed as time-sharing systems
... supporting large numbers of different users concurrently ... and the
first webserver deployed outside of europe (in the US) was on the SLAC
vm370 system. but that webserver was more akin to the current virtual
appliance.
https://ahro.slac.stanford.edu/wwwslac-exhibit
... slac and cern were similar computing operations.
one of the current issues that is frequently raised is the severe lack of adequate parallel (aka concurrent) computing operation ... especially for desktop systems.
online transaction systems and time-sharing systems provided lots of concurrently, independently executable work that can take advantage of multi-core operation .... but it is becoming a significant problem for desktops to take advantage of the newer generation of chips where multi-core is standard. in online transaction systems and time-sharing systems ... the extensive concurrent workload (that may have been time-sliced on a single processor) can now actually run concurrently on different cores/processors.
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: We're losing the battle Newsgroups: bit.listserv.ibm-main,alt.folklore.computers To: <ibm-main@bama.ua.edu> Date: Sun, 22 Jun 2008 15:45:48 -0400Efinnell15@AOL.COM (Ed Finnell) writes:
mentioned a post in an information security blog. the main part of that particular blog thread was related to the majority of the breaches that get in the news (something that PCI has been targeted at addressing).
the thread started out regarding a study that something like 84% of IT managers don't believe they need to comply with breach notification and 61% don't even believe they should notify law enforcement.
parts of the thread is repeated here
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?
after working on what is now frequently referred to as electronic commerce (mentioned earlier in this thread), we were brought into the x9a10 financial standard working group which, in the mid-90s, had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as part of that we did detailed end-to-end risk, threat, and vulnerability studies. a couple highlights:
1) security proportional to risk ... crooks/attackers may be able to outspend defenders 100-to-1. to the crooks, the information is basically worth the value of the account balance or credit limit. to the merchants, the information is basically worth some part of the profit off the transaction. the value of the information to the crooks may therefore be 100 times the value to the merchants ... as a result, the crooks may be able to outspend the defenders 100 times over when attacking the system. traditional military lore has attackers needing something like 3-5 times the resources to take a fortified fixed position. potentially being able to marshal 100 times the resources almost guarantees a breach someplace (a back-of-envelope sketch of this asymmetry follows after these highlights).
2) account number and transaction information have diametrically opposing security requirements ... on one hand the information has to be kept confidential and never divulged (a countermeasure to the account fraud flavor of identity theft). on the other hand, the information is required to be readily available for numerous business processes as part of normal transaction processing. we've periodically commented that even if the planet were buried under miles of information hiding cryptography, it still couldn't prevent information leakage.
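The first highlight is just a ratio argument; a minimal back-of-envelope sketch in python (the dollar figures are hypothetical, chosen only to illustrate the asymmetry described above):

credit_limit    = 5_000.00   # what the account information is worth to the crook (assumed figure)
merchant_profit =    50.00   # what the same information is worth to the merchant (assumed figure)

attacker_advantage = credit_limit / merchant_profit
print(f"attacker can justify spending roughly {attacker_advantage:.0f}x "
      "what the defending merchant can justify spending")   # ~100x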
so one of the things done in x9a10 as part of the x9.59 financial
transaction standard was to slightly tweak the paradigm ... making the
information useless to the attackers. x9a10 & x9.59 didn't address any
issues regarding eliminating breaches ... it just eliminated the
threat/risk from such breaches (and/or information leakage).
https://www.garlic.com/~lynn/x959.html#x959
the major use of SSL in the world today is that previously mentioned stuff now frequently referred to as *electronic commerce* ... where it is used to hide account number and payment transaction information. The x9.59 financial standard effectively eliminates that use of SSL since it is no longer necessary to hide that information (as a countermeasure to the account fraud form of identity theft).
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS X Finder windows vs terminal window weirdness Newsgroups: alt.folklore.computers Date: Sun, 22 Jun 2008 17:05:00 -0400Peter Flass <Peter_Flass@Yahoo.com> writes:
for even more topic drift, when i was an undergraduate, the univ. library
had gotten an ONR grant for library automation and was also selected to
be a beta-test site for the original CICS product release ... and i got tasked to
work on supporting the effort ... even shooting bugs in CICS. misc.
past posts mentioning CICS (and/or BDAM)
https://www.garlic.com/~lynn/submain.html#bdam
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS X Finder windows vs terminal window weirdness Newsgroups: alt.folklore.computers Date: Sun, 22 Jun 2008 21:50:16 -0400re:
for similar description ... wiki time-sharing page
https://en.wikipedia.org/wiki/Time-sharing
both cp67 (4th flr, 545 tech sq) and multics (one flr up) trace back to
ctss. misc. posts mentioning 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech
above article also mentions wiki time-sharing system evolution page
https://en.wikipedia.org/wiki/Time-sharing_system_evolution
the wiki article mentions ncss and tymshare as commercial time-sharing
services spun off using cp67 & vm370 ... other references to
commercial time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare
it didn't mention IDC which was another commercial cp67 spin-off about
the same time as ncss ... but the time-sharing system evolution page
does have pointer to idc wiki page:
https://en.wikipedia.org/wiki/Interactive_Data_Corporation
for additional drift, cp/cms history wiki page:
https://en.wikipedia.org/wiki/History_of_CP/CMS
--
40+yrs virtualization experience (since Jan68), online at home since Mar70
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: dollar coins Newsgroups: alt.folklore.computers Date: Mon, 23 Jun 2008 16:37:35 -0400Lars Poulsen <lars@beagle-ears.com> writes:
long-winded, decade old post discussing some of the current problems
... including the need to have visibility into the underlying value of the
stuff that makes up toxic CDO instruments (rather than
hiding/obfuscating it)
https://www.garlic.com/~lynn/aepay3.htm#riskm
business news programs are still claiming that there is $1 trillion of inflation in these instruments and, so far, there have only been about $400b in write-downs ... so there is possibly still $600b in write-downs to come.
much of that $1 trillion was pumped into the real-estate market bubble ... the simplified assumption is that if there is a $1 trillion write-down in the valuation of the toxic CDOs ... there is a corresponding $1 trillion of deflating pressure in the real-estate market bubble.
misc. recent posts mentioning toxic CDOs:
https://www.garlic.com/~lynn/2008.html#66 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008.html#70 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008.html#90 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008b.html#12 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008b.html#75 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#13 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#21 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#63 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008e.html#42 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#65 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#70 independent appraisers
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#10 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#32 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#46 independent appraisers
https://www.garlic.com/~lynn/2008f.html#51 independent appraisers
https://www.garlic.com/~lynn/2008f.html#52 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#57 independent appraisers
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#75 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#77 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#79 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#94 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#4 CDOs subverting Boyd's OODA-loop
https://www.garlic.com/~lynn/2008g.html#11 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#16 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#37 Virtualization: The IT Trend That Matters
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#62 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#32 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#48 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#49 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#51 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#89 Credit Crisis Timeline
https://www.garlic.com/~lynn/2008h.html#90 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#4 A Merit based system of reward -Does anybody (or any executive) really want to be judged on merit?
https://www.garlic.com/~lynn/2008i.html#30 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#64 Is the credit crunch a short term aberation
https://www.garlic.com/~lynn/2008i.html#77 Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure?
--
40+yrs virtualization experience (since Jan68), online at home since Mar70