List of Archived Posts

2006 Newsgroup Postings (05/31 - 06/15)

history of computing
history of computing
virtual memory
virtual memory
Google Architecture
virtual memory
Google Architecture
Google Architecture
Google Architecture
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
Why I use a Mac, anno 2006
Virtual Virtualizers
Virtual Virtualizers
Virtual Virtualizers
Google Architecture
Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Google Architecture
Google Architecture
Google Architecture
Mainframe Linux Mythbusting (Was: Using Java in batch
One or two CPUs - the pros & cons
Google Architecture
Google Architecture
Google Architecture
Dual Core CPUs are slower than Dual Single core CPUs ??
Token-ring vs Ethernet - 10 years later
Token-ring vs Ethernet - 10 years later
Google Architecture
Token-ring vs Ethernet - 10 years later
Token-ring vs Ethernet - 10 years later
virtual memory
One or two CPUs - the pros & cons
The very first text editor
One or two CPUs - the pros & cons
The very first text editor
Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
One or two CPUs - the pros & cons
virtual memory
Supercomputers Cost
Mainframe Linux Mythbusting (Was: Using Java in batch on
the new math: old battle of the sexes was: PDP-1
Mainframe Linux Mythbusting
Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Memory Mapped I/O Vs I/O Mapped I/O
virtual memory
DEC's Hudson fab
DEC's Hudson fab
DEC's Hudson fab
Supercomputers Cost
DEC's Hudson fab
DEC's Hudson fab
Large Computer Rescue
DEC's Hudson fab
Large Computer Rescue
Why no double wide compare and swap on Sparc?

history of computing

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: history of computing
Date: Wed, 31 May 2006 06:48:46 -0700
Tim Shoppa wrote:
Relational databases for this sort of thing - where the truly interesting stuff is by nature uncategorizable and the truly innovative stuff erases all the category definitions - are a mistake.

I somehow currently believe that spatial databases in some very high dimension are more useful for the concepts of "this inspired that" or "this evolved at the same time as that but elsewhere" or "this and that fed off each other" but even current spatial databases (a hot topic over the past decade) fall down on the spongy multivariate relations that I see.

We truly need a database system that is both spatial AND relational at the same time to be useful.

The best approximation today is probably hypertext/Wiki-like with humans drawing the links.


original relational/sql was system/r
https://www.garlic.com/~lynn/submain.html#systemr

at the same time i was doing some work for system/r ... i also got to do some stuff for a more network oriented implementation ... one that had some of the same objectives as went into the system/r implementation (i.e. abstracting the physical pointers that were characteristic of 60s database implementations). thread discussing part of this issue:
https://www.garlic.com/~lynn/2005s.html#9 Flat Query
https://www.garlic.com/~lynn/2005s.html#12 Flat Query

a many-times descendant of that work is what i use for maintaining the information that I utilize for generating the RFC index
https://www.garlic.com/~lynn/rfcietff.htm

and the various merged glossaries and taxonomies
https://www.garlic.com/~lynn/index.html#glosnote

in fact, i suspect that the high ratio of hrefs to actual text accounts for the large number of daily hits from various search engine implementations ... almost as if they are using it as a daily test case.

history of computing

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: history of computing
Date: Wed, 31 May 2006 08:24:04 -0700
Tim Shoppa wrote:
Clearly in making a post you can and do cross-reference back to previous posts, but it would be remarkable (in a Philip K Dick inspired moment) to figure out how to forward-reference to coming but not yet written posts.

ref:
https://www.garlic.com/~lynn/2006l.html#0 history of computing

so past posts get updated in the header with a Refed field listing the subsequent posts that reference them; the RFC summary information for references, obsoletes, updates, etc ... is done similarly ... as are the merged taxonomy/glossary entries.

somewhat from a relational standpoint ... a side-effect of just loading information is a kind of full normalization, full joining, and full indexing of the information just loaded ... along with the side-effect that all full join/index operations are always bidirectional ... when there is a relationship from one place to another ... there is automatically a reverse relationship from the target back to the original. it is also trivial to approx. an RDBMS infrastructure (all primary index fields are unordered members of a set) and at the same time also represent a hierarchical ordering (some members of the set may be precursors to other members of the same set) and/or a mesh ... (some members of a set have various kinds of peer-to-peer relationships with other members of the same set).

it is difficult and/or somewhat contrived to approx. the full richness of the infrastructure in HTML or even in a purely relational/sql implementation.
https://www.garlic.com/~lynn/index.html
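
purely as illustration ... a toy python sketch (not the actual implementation described above; the BiIndex class and the sample post numbers are invented) of the load-time behavior: loading a relationship indexes it and automatically records the reverse relationship, so every forward reference is also reachable from its target.

from collections import defaultdict

class BiIndex:
    # toy store where loading a relationship automatically creates
    # the reverse relationship and keeps both directions indexed
    def __init__(self):
        self.forward = defaultdict(set)   # entry -> entries it references
        self.reverse = defaultdict(set)   # entry -> entries that reference it

    def load(self, source, target):
        # a single load operation populates both directions
        self.forward[source].add(target)
        self.reverse[target].add(source)

    def refs(self, entry):
        return sorted(self.forward[entry])

    def refed_by(self, entry):
        return sorted(self.reverse[entry])

idx = BiIndex()
idx.load("2006l.html#1", "2006l.html#0")     # post #1 references post #0
idx.load("2006l.html#2", "2006k.html#57")
print(idx.refs("2006l.html#1"))       # forward: what #1 references
print(idx.refed_by("2006l.html#0"))   # reverse: the "Refed" view of #0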

virtual memory

From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: virtual memory
Date: Wed, 31 May 2006 08:56:51 -0700
robertwessel2@yahoo.com wrote:
Even the current 64 bit machines support pagein/pageout, but I don't think (64 bit) zOS uses it for anything. zVM supports it for 31 bit guests, of course.

re:
https://www.garlic.com/~lynn/2006k.html#57 virtual memory

the earlier 3033 had a hack related to this in the late 70s involving 24-bit addressing. instructions were limited to 24-bit addresses, both real and virtual.

i've commented before that the 3033s were feeling some pressure from 4341 clusters and the 16mbyte real memory address limit .... i.e. you could configure a cluster of six 4341s, each 1mip (6mips aggregate), each with 16mbytes real (96mbytes aggregate), and each with six i/o channels (36 i/o channels aggregate), for about the same cost as a 4.5mip, 16 i/o channel, 16mbyte 3033.

the 370 PTE (page table entry) had two undefined bits ... the standard PTE held a 12bit real 4k page number (i.e. a 12bit page number plus a 12bit byte offset gives 24bit, 16mbyte addressing). IDALs actually had a field for doing i/o with real addresses up to 31bits. so the 3033 came out with 32mbyte real support ... using one of the undefined PTE bits to allow defining 13bit (8192) real 4k page numbers. IDALs could then do i/o moving pages to/from disk ... above the 16mbyte line.

however, there were some number of things that required real addressing of the contents of virtual pages ... and for this a game was played with page table entries .... when a virtual page's contents resided in a real page above the 16mbyte line and needed to be addressed with a real address, the contents would be copied to a real page below the 16mbyte line and the virtual page remapped to point at the below-the-line copy.
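
the addressing arithmetic above, worked as a small python sketch (purely illustrative):

PAGE_SIZE = 4096  # 4k pages, i.e. a 12-bit byte offset within a page

def real_storage(page_number_bits):
    # addressable real storage = number of page frames * page size
    return (1 << page_number_bits) * PAGE_SIZE

print(real_storage(12) // (1 << 20), "MB")  # standard 370 PTE: 16 MB
print(real_storage(13) // (1 << 20), "MB")  # 3033 with the extra PTE bit: 32 MB
print((1 << 31) // (1 << 20), "MB")         # 31-bit IDAL real address: 2048 MB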

misc. past posts mentioning the 3033 hack for 32mbyte real storage:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#19 address space
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 31 May 2006 21:33:08 -0600
"Eric P." writes:
I found such studies, which appear to show that VMS's mechanism is within, by looking at a graph, about 0.5% of LRU.

presumably this refers to "local" LRU as opposed to global/system LRU. what was their basis for "LRU" ... was it at the per-instruction reference level (updating the ordering of all pages by least-recently-used on every instruction's references) ... or was it at a larger granularity?

one of the things that the several emulators did at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

was to take full instruction traces capturing the exact storage reference sequence (at the instruction level) and then compare LRU against things like (belady's) OPT as well as the various implementations approximating LRU ... and then look at global system thruput of the different mechanisms.

part of this contributed to the analytical model that grew into the sales/marketing assist tool available on the hone system
https://www.garlic.com/~lynn/subtopic.html#hone

called the performance predictor.

one of the issues was that standard (global or local) LRU degenerated to FIFO under various conditions, performing much worse than RANDOM and/or OPT. part of the answer was the sleight-of-hand coding that i did in the global LRU approximation so that it would degenerate to RANDOM in the cases where standard LRU degenerated to FIFO (and overall it showed a 5-10 percent improvement over straight LRU).
https://www.garlic.com/~lynn/subtopic.html#wsclock
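
for illustration only ... a generic clock-sweep sketch in python of the kind of global LRU approximation being discussed. it does *not* reproduce the specific modification mentioned above (degenerating toward RANDOM instead of FIFO), just the basic reference-bit sweep such implementations build on; the class name and trace are invented.

class ClockReplacer:
    # generic clock/second-chance sweep over a global set of page frames
    def __init__(self, nframes):
        self.frames = [None] * nframes     # page occupying each frame
        self.refbit = [False] * nframes    # hardware reference bit per frame
        self.hand = 0

    def touch(self, page):
        if page in self.frames:            # hit: hardware sets the ref bit
            self.refbit[self.frames.index(page)] = True
            return None
        return self._replace(page)         # miss: pick a victim frame

    def _replace(self, page):
        while True:
            if not self.refbit[self.hand]:
                victim = self.frames[self.hand]
                self.frames[self.hand] = page
                self.refbit[self.hand] = True
                self.hand = (self.hand + 1) % len(self.frames)
                return victim
            self.refbit[self.hand] = False  # give the page a second chance
            self.hand = (self.hand + 1) % len(self.frames)

r = ClockReplacer(3)
for p in [1, 2, 3, 1, 4, 2]:
    print("touch", p, "-> evicted", r.touch(p))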

the other issue in various of these studies wasn't so much how close things got to LRU (or belady's OPT) but comparing overall system thruput using different strategies.

leading up to releasing my resource manager on 11may76
https://www.garlic.com/~lynn/subtopic.html#fairshare

we did thousands of simulations and benchmarks ... that helped calibrate both the stuff in the performance predictor as well as in the resource manager. we then did a variation on the performance predictor, along with a lot of synthetic benchmarks calibrated against workload profiles from hundreds of systems and an automated benchmarking infrastructure. the modified performance predictor got to choose the workload profile, configuration, and system parameters for the next benchmark; it then predicted what should happen, the benchmark was run, and the measured results were compared against the prediction. the modified performance predictor incorporated the most recent results ... and then used all the past benchmark results plus the most recent to select the next benchmark. leading up to the final release of the resource manager ... 2000 such benchmarks were run, taking three months elapsed time.
https://www.garlic.com/~lynn/submain.html#bench
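
a toy python sketch of the predict/run/compare/recalibrate loop described above ... the model, the synthetic "benchmark", and all the numbers are invented, purely to show the shape of the methodology, not the actual performance predictor.

import random

def predict(model, workload):
    # invented analytical model: throughput degrades with the number of users
    return model["base"] / (1.0 + model["load_factor"] * workload["users"])

def run_benchmark(workload):
    # stand-in for actually running a calibrated synthetic benchmark
    return 100.0 / (1.0 + 0.05 * workload["users"]) * random.uniform(0.95, 1.05)

model = {"base": 100.0, "load_factor": 0.04}
history = []
for i in range(20):
    # pick the next configuration -- here, near the past benchmark where
    # prediction and measurement disagreed the most (an invented heuristic)
    if history:
        worst = max(history, key=lambda h: abs(h["error"]))
        users = max(1, worst["users"] + random.randint(-10, 10))
    else:
        users = random.randint(10, 200)
    workload = {"users": users}
    predicted = predict(model, workload)
    measured = run_benchmark(workload)
    history.append({"users": users, "error": measured - predicted})
    # crude recalibration: nudge the load factor toward the measurements
    model["load_factor"] *= (predicted / measured) ** 0.1

print("calibrated load_factor:", round(model["load_factor"], 4))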

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google Architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 01 Jun 2006 10:43:19 -0600
Bill Richter wrote:
It appears that Google architecture is the antithesis of conventional mainframe application architecture in all aspects.

http://labs.google.com/papers/googlecluster-ieee.pdf

and the difference between that and loosely-coupled or parallel sysplex?


long ago and far away, my wife was con'ed into going to POK to be in charge of loosely-coupled architecture ... she was in the same organization as the guy in charge of tightly-coupled architecture. while she had come up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

it was tough slogging because all the attention was focused on tightly-coupled architecture at the time. also she had battles with the sna forces ... who wanted control of all communication that left the processor complex (i.e. outside of direct disk i/o, etc).

part of the problem was that in the early days of SNA ... she had co-authored a "peer-to-peer" network architecture with Bert Moldow ... AWP39 (somewhat viewed as being in competition with sna). while SNA was tailored for centralized control of large numbers of dumb terminals ... it was decidedly lacking for peer-to-peer operations among large numbers of intelligent peers.

a trivial example: sjr had done a cluster 4341 implementation that used highly optimized peer-to-peer protocols running over a slightly modified trotter/3088 (i.e. what eventually came out as the conventional ctca ... but with interconnection for eight processors/channels). the peer-to-peer, asynchronous protocol could achieve cluster synchronization in under a second elapsed time (for eight processors). doing the same thing with SNA increased the elapsed time to approx. a minute. the group was forced to only release the SNA-based implementation to customers ... which obviously had severe scaling problems as the numbers in a cluster increased.
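
a rough back-of-the-envelope model in python ... the per-message and per-session numbers are invented, chosen only to echo the rough magnitudes mentioned above (under a second peer-to-peer vs approx. a minute for eight processors), to show the shape of the scaling.

def peer_to_peer_sync(nodes, msg_latency_s):
    # all nodes exchange state in parallel; one overlapped round suffices
    return msg_latency_s

def centralized_sync(nodes, per_session_overhead_s):
    # a central point works through each node in turn over heavyweight sessions
    return nodes * per_session_overhead_s

for n in (8, 16, 64):
    print(n, "nodes:",
          peer_to_peer_sync(n, 0.5), "sec peer-to-peer vs",
          centralized_sync(n, 7.0), "sec centralized/serialized")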

recent cross-over reference mentioning 4341 clusters
https://www.garlic.com/~lynn/2006l.html#2 virtual memory

the communication division did help with the significant uptake of PCs in the commercial environment. a customer could replace a dumb 327x with a PC for approx. the same price, get datacenter terminal emulation connectivity, and, in the same desktop footprint, also have some local computing capability. as a result, the communication group also ended up with a large installed base of products in the terminal emulation market segment (with tens of millions of emulated dumb terminals)
https://www.garlic.com/~lynn/subnetwork.html#emulation

in the late 80s, we had come up with 3-tier architecture (as an extension of 2-tier, client/server) and were out pitching it to customer executives. however, the communication group had come up with SAA, which was oriented toward trying to stem the tide moving to peer-to-peer networking and client/server and away from dumb terminals. as a result, we tended to take a lot of heat from the SAA forces.
https://www.garlic.com/~lynn/subnetwork.html#3tier

in the same time frame, a senior engineer from the disk group in san jose managed to sneak a talk into the internal, annual world-wide communication conference. he began his talk with the statement that the communication group was going to be responsible for the demise of the disk division. basically the disk division had been coming up with all sorts of high-thruput, peer-to-peer network capability for PCs and workstations to access the datacenter mainframe disk farms. the communication group was constantly opposing these efforts, protecting its installed base of terminal emulation products. recent reference to that talk:
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?

i had started the high-speed data transport project in the early 80s ... hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and had a number of T1 (1.5mbit) and higher speed links for various high-speed backbone applications. one friday, somebody in the communication group started an internal discussion on high-speed communication with some definitions ... recent posting referencing this
https://www.garlic.com/~lynn/2006e.html#36 definitions from communication group


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday, i was in the far-east talking about purchasing some hardware and they had the following definitions on their conference room wall

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

part of this was that the communication division's 37xx product line only supported up to 56kbit links. They had recently done a study to determine if T1 support was required ... which concluded that in 8-10 years there would only be 200 mainframe customers requiring T1 communication support. The issue could have been that the people doing the study were supposed to come up with results supporting the current product line ... or maybe they didn't understand the evolving communication market segment, or possibly both.

their methodology was to look at customers using 37xx "fat pipes" ... basically being able to operate multiple parallel 56kbit links as a simulated single connection. They found several customers with two parallel links, some with three parallel links, a few with four parallel links, and none with a higher number. Based on that, they projected that it would take nearly a decade before there were any number of customers with parallel links approaching T1 (1.5mbits) capacity.

the problem with the analysis at the time was that the telcos were tariffing a T1 at approx. the same as five 56kbit links. customers going to more than four 56kbit links were simply buying full T1s and operating them with hardware from other vendors. A trivial two-week survey turned up 200 mainframe customers with full T1 operations ... something the communication group was projecting wouldn't occur for nearly another decade.
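
the tariff arithmetic as a small python sketch (the roughly 5x ratio is the one stated above; costs are in relative units):

T1_KBITS = 1544            # full T1 capacity
LINK_KBITS = 56            # single leased-line capacity
LINK_COST = 1.0            # relative monthly tariff per 56kbit link
T1_COST = 5.0 * LINK_COST  # T1 tariffed at roughly five 56kbit links

for links in range(1, 8):
    parallel_cost = links * LINK_COST
    better = "stay on 56kbit links" if parallel_cost < T1_COST else "buy a full T1"
    print(f"{links} x 56kbit = {links*LINK_KBITS:4d} kbit,"
          f" cost {parallel_cost:.1f} vs T1 cost {T1_COST:.1f} -> {better}")

... which is why a survey of 37xx "fat pipes" never saw more than four parallel links: the customers who needed more capacity had already moved to full T1s on other vendors' hardware.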

so last fall, i was at a conference and there was a talk about "google as a supercomputer". the basic thesis was that google was managing to aggregate a collection of processing power and data storage well into the supercomputer range ... and doing it for 1/3rd the cost of the next closest implementation.

slightly related old post
https://www.garlic.com/~lynn/95.html#13

from when we were working on scaling for our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 01 Jun 2006 12:02:34 -0600
"Eric P." writes:
The article I'm looking at is not that specific, and I do not have access to the ACM to see what the cited papers say.

I suspect they mean True LRU. As the local working set size shrinks, it will toss out and then quickly fault back (soft fault) pages. Therefore as the working set size shrinks, the remaining elements asymptote to LRU. A working set of 1 page will be the LRU page.

I am also interested in seeing the cite list for those analysis papers. I note that the dates on the papers are all 2 years after VMS was first sold (1979) and 7 years after its design was started (1975). Presumably they had something more than a hope and a prayer to go on when they came up with their approach.


under constrained resources, LRU replacement may have an even greater predisposition towards FIFO ... i.e. the applications have only had the opportunity to touch a series of pages in sequence ... and there is little or no difference between LRU replacement and FIFO replacement (since the first page in is also the least recently referenced page). It is frequently only in a less constrained environment, where the application has had the opportunity to make repeated references, that the LRU replacement pattern starts to diverge from FIFO.

it was in these particular circumstances (where LRU replacement degenerates to the equivalent of FIFO replacement) that RANDOM replacement can be significantly better than LRU/FIFO replacement.
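
a tiny python simulation of that point (the cyclic reference string and frame count are made up): on a sequential scan slightly larger than the available frames, LRU and FIFO evict identically and miss on every reference, while RANDOM keeps some pages around long enough to get occasional hits.

import random

def simulate(trace, nframes, policy, seed=1):
    random.seed(seed)
    frames, order, faults = [], [], 0    # order tracks FIFO/LRU position
    for page in trace:
        if page in frames:
            if policy == "LRU":          # refresh recency on a hit
                order.remove(page)
                order.append(page)
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            order.append(page)
            continue
        victim = random.choice(frames) if policy == "RANDOM" else order[0]
        frames.remove(victim)
        order.remove(victim)
        frames.append(page)
        order.append(page)
    return faults

trace = [1, 2, 3, 4, 5] * 20             # cyclic scan of 5 pages, 4 frames
for policy in ("LRU", "FIFO", "RANDOM"):
    print(policy, simulate(trace, 4, policy), "faults out of", len(trace))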

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google Architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 01 Jun 2006 12:33:10 -0600
Chris.Craddock@CA.COM (Craddock, Chris) writes:
GOOGLE is certainly a loosely coupled architecture, but as you of all people would know, there are significant differences between that and a parallel sysplex. The main feature they (and Amazon as well btw) focus on is the full burdened price of their computational units including power, cooling, footprint etc. and that makes economic sense for them given the nature of their business application.

re:
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

so the issue is effectively how fast fault isolation/recovery/tolerance technology becomes commoditized. this is somewhat the scenario that happened with RAID ... when RAID first appeared, it was frequently deprecated compared to "mainframe" DASD ... but since then, it has effectively become the standard.

for a little drift, i've repeated several times before what I did for i/o supervisor for the dasd engineering and product test labs (bldg. 14 & bldg 15)
https://www.garlic.com/~lynn/subtopic.html#disk

they had "testcells" ... basically hardware under development ... the term testcell somewhat comes from the security provisions ... the test hardware was in individual heavy steel mesh "cages" (testing cells) ... inside a secured machine room.

they had tried doing testing in an operating system environment ... but at the time, MVS had an MTBF of 15 mins operating with a single testcell. i undertook to rewrite the i/o supervisor so that it would never fail ... even when operating a half-dozen to a dozen testcells concurrently .... allowing the processor complex to also be used for some number of other tasks at the same time.

bldg 14/15 tended to get early engineering models of processors ... also as part of disk testing. however, in the "stand-alone" mode of operation ... the processors were dedicated to scheduled i/o testing (which tended to be less than one percent cpu utilization).

with the bulletproof i/o supervisor ... the idle cpu could be put to other tasks. at one point, bldg. 15 got the first engineering 3033 (outside of POK), dedicated to disk i/o testing. however, once we had testing going on in an operating system environment, we could take advantage of an essentially otherwise-idle processor.

one of the applications that we moved onto the machine was the air bearing modeling that was going on as part of the development of the 3380 floating heads. SJR had a 370/195 that was being operated as an internal service ... and the air bearing modeling might get an hour or so a couple times a month. however, with an essentially idle 3033 sitting across the street ... we could drastically improve on that (the 370/195 was peak rated around 10mips ... but most codes ran in the 5mip range ... and the 3033 was in the 4.5mip range ... but essentially unlimited amounts of 3033 time were still better than an hour or so of 370/195 time a couple times a month).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google Architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 01 Jun 2006 20:17:52 -0600
rkuebbin writes:
Actually, financial query might use this architecture, except for the significant difference between this workload and "typical commercial" workloads -- they do no updates. Therefore no data integrity issues.

re:
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture

we took some amount of heat from the communication group in the 80s while working on high-speed data transport
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and 3-tier architecture (as extension of 2-tier, client/server)
https://www.garlic.com/~lynn/subnetwork.html#3tier

then in the early 90s ... when we were working on scaling non-mainframe loosely-coupled for the commercial market
https://www.garlic.com/~lynn/subtopic.html#hacmp

we got hit and told we couldn't work on anything involving more than four processors ... minor reference:
https://www.garlic.com/~lynn/95.html#13

however, the cluster scaling work has evolved in a number of ways. high-energy physics picked it up and evolved it into something called GRID. a number of vendors also contributed a lot of work on GRID technology and are now out pushing it in various commercial market segments ... including financial. some of the early financial adopters are using GRID for doing complex financial analysis in real-time.

some topic drift ... i gave a talk a couple years ago at the global grid forum
http://forge.ggf.org/sf/docman/do/listDocuments/projects.ogsa-wg/docman.root.meeting_materials_and_minutes.ggf_meetings.ggf11

.... select GGF11-design-security-nakedkey in the above.

misc. GRID related news article in the commercial market

Investment Banks Using Grid Computing Models
http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=1148478974861413176&block=
ASPEED Taking Financial Grids to the Next Level
http://www.gridtoday.com/grid/673718.html
Wachovia uses grid technology to speed up transaction apps
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9000476
Grid Computing That Heals Itself
http://www.enterpriseitplanet.com/networking/news/article.php/3606041
GRID WORLD 2006: New IBM software brings autonomic computing to Grids
http://www.enterprisenetworksandservers.com/newsflash/art.php?589

as somewhat referenced in a couple of the above ("batch processing going back 50 years")... bringing "batch" to GRID can be somewhat viewed as JES3 on steroids. before getting con'ed into going to pok to be in charge of loosely coupled architecture
https://www.garlic.com/~lynn/submain.html#shareddata

my wife had been in the JES group in g'burg. She had been one of the catchers for ASP ... as part of its transformation into JES3. She had also done a business analysis of the major JES2 and JES3 features as part of a proposal for creating a merged product. however, that never made it very far ... in part because of a lot of internal politics.

random past posts mentioning jes3:
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000.html#76 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#48 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002k.html#48 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#39 spool
https://www.garlic.com/~lynn/2004o.html#32 What system Release do you use... OS390? z/os? I'm a Vendor S
https://www.garlic.com/~lynn/2005o.html#39 JES unification project
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#0 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#15 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#16 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#19 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#32 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google Architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 02 Jun 2006 05:56:04 -0600
Anne & Lynn Wheeler wrote:
however, the cluster scaling work has evolved in a number of ways. high-energy physics picked it up and evolved it into something called GRID. a number of vendors also contributed a lot of work on GRID technology and are now out pushing it in various commercial market segments ... including financial. some of the early financial adopters are using GRID for doing complex financial analysis in real-time.

re:
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#7 Google Architecture

recent news article from yesterday

Cern seeks to tighten security for data grid
http://www.vnunet.com/computing/news/2157258/cern-seeks-tighten-security

from above:
Although large data grids are only starting to be used in business, Cern is seeing a lot of interest from industry. The lab is developing grids that will reach across organisational boundaries, allowing multiple institutions to share resources.

'Businesses are now becoming interested in this kind of grid,' said Grey. 'Its use could enable suppliers and companies to share resources and large corporations to share information between business units. Grid technology will only be adopted if the right type of security solutions are available.'


... snip ...

other references:
http://www-128.ibm.com/developerworks/library/gr-watch1.html
http://www.alphaworks.ibm.com/tech/optimalgrid
http://www-128.ibm.com/developerworks/grid
http://www.gridcomputingplanet.com/news/article.php/3281_1480781
http://www.ggf.org/UnderstandingGrids/ggf_grid_und
http://www.semanticgrid.org/GGF/erstand.php
http://gridcafe.web.cern.ch/gridcafe/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 02 Jun 2006 07:29:39 -0600
Brian Inglis writes:
RSX with vm? Pages (512B) were too small for such an approach: fault context switch time and handling path length were likely longer than user code execution time.

one of the things that some of the paging simulation work done at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

examined was the trade-offs of different page sizes. in (severely) real-storage-constrained environments ... smaller page sizes meant that applications could run efficiently with smaller real storage requirements (i.e. the portions of a larger page that weren't actually needed by an application would not have to occupy real storage). larger page sizes tended to increase the real storage required for an application to run ... but tended to result in more efficient use of the paging i/o infrastructure, moving larger blocks of data to/from memory.

this was somewhat visible in the 370 architecture from the early 70s. it offered both 2k and 4k page sizes. the dos/vs and vs1 operating systems used 2k page sizes ... since they tended to run on lower-end 370 machines which typically had a couple hundred kbytes of real memory (after fixed kernel requirements, sometimes maybe only 100k to 200k left ... i.e. 50-100 2k pages ... or 25-50 4k pages). os/vs2 and vm ran with 4k pages.
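
a toy python calculation of that trade-off (the reference pattern and sizes are invented): scattered small references waste more real storage with larger pages, while a clustered working set takes fewer page-in operations with larger pages.

import math

WORKING_SET = 200 * 1024       # bytes of virtual storage actually used

def scattered_resident(page_size, regions=100):
    # each small scattered region lands in its own page, so resident
    # storage is one whole page per region
    return regions * page_size

def clustered_page_ins(page_size, working_set=WORKING_SET):
    # a contiguous working set needs the same resident storage either
    # way, but fewer (larger) page-in operations with bigger pages
    return math.ceil(working_set / page_size)

for ps in (2048, 4096):
    print(f"{ps//1024}k pages: scattered resident ~{scattered_resident(ps)//1024}KB,"
          f" clustered page-ins {clustered_page_ins(ps)}")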

the trade-off was seen in the transition of vs1 to the later 370/148, which typically had 1mbyte of real storage.

vs1 was an adaptation of the real-storage MFT system to virtual memory ... somewhat akin to the earlier description of svs
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#2 Multiple address spaces
https://www.garlic.com/~lynn/2006i.html#43 virtual memory

vs1 would define a single 4mbyte virtual address space from which it ran multiple applications ... performing paging operations using 2k pages (typically 370/135 and 370/145 with 256kbyte to 512kbyte real storage).

in the mid-70s, the 138 and 148 follow-ons to the 135/145 were introduced, with a 148 configuration typically having 1mbyte of real storage.

A lot of 370/148 configurations operated with vm/370 as the base host system running vs1 in a 4mbyte virtual machine. vs1 would be configured so that its 4mbyte virtual address space was mapped one-for-one to the virtual machine address space. As a result, all 4mbytes of the virtual address space would appear to always be resident and never result in a page fault. A special interface was defined that allowed VM/370 to reflect any virtual machine page faults to the VS1 supervisor (allowing the VS1 supervisor to perform a task switch ... although the VS1 supervisor wouldn't otherwise have to perform any operation in support of paging). When VM/370 finished processing the page fault ... it could then signal the VS1 supervisor that the page was now available.
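
a toy python sketch of the handshake just described (names and details invented; this is not the actual VM/370 or VS1 code): the host reflects the fault so the guest supervisor can dispatch another task, then signals when the page is resident.

from collections import deque

class GuestSupervisor:
    # toy VS1-like supervisor: on a pseudo page fault it marks the task
    # blocked and dispatches another; on completion it marks it ready
    def __init__(self, tasks):
        self.ready = deque(tasks)
        self.blocked = {}

    def pseudo_page_fault(self, task, page):
        self.blocked[page] = task
        print("guest: task", task, "waits for page", page, "- dispatch another")

    def page_complete(self, page):
        task = self.blocked.pop(page)
        self.ready.append(task)
        print("guest: page", page, "resident, task", task, "ready again")

    def dispatch(self):
        return self.ready.popleft() if self.ready else None

class Host:
    # toy VM/370-like host that resolves guest page faults asynchronously
    def __init__(self, guest):
        self.guest = guest
        self.pending = deque()

    def guest_page_fault(self, task, page):
        self.guest.pseudo_page_fault(task, page)   # reflect to the guest
        self.pending.append(page)                  # start the page-in

    def page_io_done(self):
        self.guest.page_complete(self.pending.popleft())

guest = GuestSupervisor(["A", "B"])
host = Host(guest)
running = guest.dispatch()            # task A
host.guest_page_fault(running, 0x42)  # A faults; B can run meanwhile
running = guest.dispatch()            # task B
host.page_io_done()                   # page arrives; A is ready again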

So there were a number of comparisons of running VS1 on a "bare" 370/148 with 1mbyte real storage and 2k virtual pages versus running VS1 on the same hardware in a virtual machine under VM/370 doing paging with 4k virtual pages (with the total real storage available to VS1 operation reduced by the VM/370 fixed kernel requirements). It turned out that VS1 frequently ran faster under VM/370 than it did running on the bare hardware; i.e. VM/370 performing 4k page i/o operations for VS1 (rather than relying on VS1 to perform 2k page i/o operations) more than offset the overhead of operating in a virtual machine.

The issue was that in the mid-70s, with the transition to 1mbyte (and larger) real storage sizes, systems had changed from being significantly real-storage constrained ... to only being moderately real-storage constrained ... and page fault handling and page i/o operations were starting to dominate as the source of system bottleneck.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 02 Jun 2006 09:32:13 -0600
Del Cecchi writes:
And the 360 had something called LCS for "Large Core Storage" that was slower and cheaper than main storage. Of course it wasn't used for paging.....

I saw two different strategies in the use of LCS ... a 360/65 installation might have 1mbyte of 750nsec-access memory and 8mbytes of 8-microsecond-access LCS. one strategy was to allocate lower-use instructions and data in LCS and access them directly there. the other strategy used it sort of like an electronic disk/cache (somewhat like the electronic disks that later appeared on IBM/PCs) ... where the operating system would copy/load stuff into "fast" memory prior to use.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 02 Jun 2006 12:15:29 -0600
Anne & Lynn Wheeler writes:
one of the things that some of the paging simulation work done at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech


re:
https://www.garlic.com/~lynn/2006l.html#9 virtual memory

part of the instruction storage ref/change trace and paging simulation work was also the program reorganization work that was mentioned previously
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

which was eventually released in 1976 as vs/repack

not only were various different page sizes simulated ... but the traces were also used for program re-organization, compacting storage that was used together into a minimum number of pages.

the "small" page advantage in severely real-storage-constrained configurations was avoiding unnecessarily bringing in parts of virtual storage that weren't going to be needed ... which might be brought in when using larger page sizes, (unnecessarily) increasing the total amount of real memory used.

however, with program re-organization ... there would be a high probability that adjacent storage would, in fact, be required ... so larger pages would still tend to occupy about the same amount of total real storage (compared to the smaller page sizes) ... with the advantage that the number of faults and fetches needed to bring in the virtual memory was reduced (fetching fewer, larger chunks).

as real storage sizes increased ... the benefit of high compaction and utilization from smaller pages would be less of an advantage ... and the larger number of page faults and transfers would become an increasingly significant throughput factor (vis-a-vis larger page sizes with fewer faults and fetches). also, the program restructuring tended to compact things w/o requiring the additional benefit of smaller page sizes.
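
a much-simplified python sketch of the general idea (not the Hatfield & Gerald algorithm itself; the trace, routine names, and sizes are invented): use a reference trace to find routines that are used together and greedily pack them into the same page.

from collections import Counter

# invented trace of routine names referenced close together in time
trace = ["init", "parse", "eval", "parse", "eval", "emit", "parse", "eval"]
sizes = {"init": 1800, "parse": 1500, "eval": 1400, "emit": 900}  # bytes
PAGE = 4096

# count how often pairs of routines appear adjacently in the trace
affinity = Counter()
for a, b in zip(trace, trace[1:]):
    if a != b:
        affinity[frozenset((a, b))] += 1

# greedy packing: seed each page with the hottest remaining routine, then
# pull in the routines with the strongest affinity to that seed
hotness = Counter(trace)
remaining = set(sizes)
pages = []
while remaining:
    page = [max(remaining, key=lambda r: hotness[r])]
    remaining.remove(page[0])
    used = sizes[page[0]]
    for cand in sorted(remaining,
                       key=lambda r: -affinity[frozenset((r, page[0]))]):
        if used + sizes[cand] <= PAGE:
            page.append(cand)
            used += sizes[cand]
    remaining -= set(page[1:])
    pages.append(page)

print(pages)   # routines referenced together end up co-resident in a page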

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 08:26:35 -0600
Bill Todd writes:
You don't suppose that could possibly have been because advances in the state of the art often appear earlier in academic papers than in commercial applications, do you? There is, after all, a rather considerable time-lag (often infinite, in fact) between such an advance and its incorporation into an existing, complex OS.

for some drift ... there were academic papers and implementations of local LRU at the time i did global LRU and it started shipping in a standard operating system ... and as referenced, it was a decade later that there was work on global LRU in academia .... although there was also significant (academic) pressure to suppress the publication of the global LRU work (i've been told that there is a website/blog someplace containing copies of all the correspondence that flew back and forth during the disagreement).

i believe i somewhat contributed to allowing the academic global LRU work to be published by citing real-live data showing both local LRU AND global LRU implementations for the same operating system running on (nearly) the same hardware with (nearly) the same kind of workload (although the global LRU environment was running on hardware that had only 2/3rds the real memory for paging, twice the number of users/workload, and still had better performance than the local LRU implementation).

a few past references:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#38 virtual memory
https://www.garlic.com/~lynn/2006i.html#42 virtual memory
https://www.garlic.com/~lynn/2006j.html#18 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 09:01:17 -0600
Bill Todd writes:
Other architectures that were not designed to cover a range of application down into the minicomputer space. And since VMS included mechanisms to support multi-page paging operations, it managed quite well in larger applications as well.

in the very early 80s, both MVS and VM shipped "big page" support for 3380s ... basically full-track transfers, ten 4k pages at a time. there were a number of things in the big page implementation that addressed the disk arm access bottleneck *problem* that was becoming the dominant system bottleneck by that period.
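
rough arithmetic in python for why moving ten 4k pages per arm access wins once the disk arm is the bottleneck ... the service times are invented-but-plausible guesses, and it assumes the full track's worth of pages is actually wanted.

SEEK_MS = 16.0       # average arm seek (rough guess)
ROTATE_MS = 8.3      # average rotational delay at 3600 rpm
XFER_4K_MS = 1.3     # transfer time for a single 4k page (rough guess)

def single_page_faults(npages):
    # each fault costs a full arm access for one 4k page
    return npages * (SEEK_MS + ROTATE_MS + XFER_4K_MS)

def big_page_faults(npages, per_group=10):
    # one arm access brings in a full track of ten 4k pages
    groups = -(-npages // per_group)   # ceiling division
    return groups * (SEEK_MS + ROTATE_MS + per_group * XFER_4K_MS)

for n in (10, 100):
    print(n, "pages:", round(single_page_faults(n), 1), "ms one-at-a-time vs",
          round(big_page_faults(n), 1), "ms full-track big pages")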

lots of past posts describing/discussing "big page" implementation.
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 10:01:30 -0600
Bill Todd writes:
That's getting repetitive, I'm afraid. As I stated earlier, I'm interested in head-to-head studies using repeatable traces rather than anecdotes with far less stringent controls on several significant variables.

re:
https://www.garlic.com/~lynn/2006l.html#12 virtual memory

when i was an undergraduate in the 60s ... i did several score different kinds of implementations ... however, i was much more interested in shipping the best implementation for production systems than in doing anything with academic publications. very little of the work done in that period made it into any academic literature. the best apples-to-apples comparison that i know of from the period was the grenoble science center versus the cambridge science center ... one doing local LRU and the other doing global LRU ... both on the cp/67 operating system, both on 360/67 machines, and both with similar cms workloads.

in the past i've periodically mentioned observing people in conflicts spending more resources on the argument than the resources required for each participant to go off and implement all possible variations and measure the results.

so one possible solution is for you to do as i did as an undergraduate in the 60s and go off and implement several score different variations, deploy them in production environments, and compare the measurements.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 10:13:19 -0600
Morten Reistad writes:
ISTR 2k pages was also quite common.

previous post mentioning 370s shipping with both 2k and 4k virtual page support, and comparing operating systems that used 2k virtual pages with operating systems that used 4k virtual pages in different environments ... sometimes on the same 370 hardware.
https://www.garlic.com/~lynn/2006l.html#9 virtual memory

in fact, there was an infamous customer situation that involved upgrading from a 370/168-1 (with 16kbyte cache) to 370/168-3 (with 32kbyte cache).

the customer ran vs1 (with 2k virtual pages) under vm/370. vm/370 nominally ran with 4k virtual pages ... except when building shadow page tables for a virtual machine guest operating system ... it used the same page mode (2k or 4k) for the shadow page tables as the guest operating system used for its own virtual page tables.

so for the transition to the 370/168-3 with twice the cache size ... they chose to use the "2k" bit to address the additional cache line entries (under the assumption that this was a high-end machine only used by MVS or some vm). if the 370/168-3 ran with 2k virtual pages, it would switch to 16k cache-size mode (not using the additional 16k). if there was a switch between 4k virtual page mode and 2k virtual page mode, it would first completely flush the cache.

so for this particular customer, the upgrade from the 370/168-1 to the faster 370/168-3 actually resulted in degraded performance (because of the large number of cache flushes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 11:10:30 -0600
Bill Todd writes:
A sufficiently large value of 'fine' to have propelled DEC to a firm second place in the world of computing, with realistic potential to have competed for the top spot in a finite amount of time. The fact that a decade or so *after* its introduction the VAX page size may have become a bit constraining does not reflect poorly on its choice, only on DEC's failure to have created a suitable successor in a timely manner - and the fact that VAXen were still selling 22 years after their introduction indicates that for at least some significant uses a 512 B page size was *still* 'fine' then.

you could also observe that cp/40 started using 4k virtual pages in 1966 ... and the line has gone thru several subsequent generations as cp/67, vm/370, vm/sp, vm/xa ... and continues up thru today to support 4k virtual pages ... 40 years later.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 11:51:50 -0600
Brian Inglis writes:
AFAIR the complaints were not generally against consolidation, but execution of a short sighted strategy that alienated many existing customers by making them look stupid to their management for having bought DEC: drop upgrades and enhancements for mini and mainframe lines, offer systems that provided no more performance than an 11/70 until the 8000 series came out five years later, support backward compatibility with a real time OS but with interrupt latency that prevented it from being used for much of that work, etc.

Sales seemed interested in taking orders only for VAXen, and actually argued against purchases of 11s and 20s!

So growing customers had to look elsewhere for a forward path, unless they had locked themselves in with extensive investments in DEC hardware or software, and could not afford to get out.


the mid-range 370 4341 had somewhat similar issue. i've posted the us & world-wide sales for vax, broken out by model and year
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors

... and noted that 4341 sales were actually larger than vax. part of it was that some sort of threshold had been crossed and large corporations were placing orders for several hundred 4341s at a time.
https://www.garlic.com/~lynn/2006k.html#31 PDP-1
https://www.garlic.com/~lynn/2006k.html#32 PDP-1

the issue for the corporation was that while the 4341 offered better price/performance than vax ... it also offered better performance and better price/performance than the entry-level models of the high-end machines. the high-end was where the majority of the revenue was ... and even tho the 4341 was a larger market than vax ... it still wasn't very significant revenue compared to the high-end.

towards the mid-80s, one of the executives of the high-end division gave a talk that included the comment that something like 11,000 of the vax sales should have been 4341s.
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

as mentioned here the total of 750 & 780 sales (thru 1988) was approx. 46k
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

... so the 11,000 that should have been 4341s represented nearly 1/4th of the total combined 750 & 780 sales

however, one of the things that they failed to note was that in some of the internal politics between the mid-range 4341 division and the high-end division ... the high-end division managed to limit the allocation of some components needed for building 4341s (which was viewed by some as trying to minimize the impact 4341 sales could have on the high-end entry-level machines).

of course, by the mid-80s, you were starting to see customers in that mid-range market moving to workstations and high-end PCs for their computing needs. you can also see that in the vax sales numbers in the mid-80s ... while there was continued growth in vax sales during that period ... much of the volume had shifted to micro-vaxes.

as posted before ... in the later 80s the world wide corporate revenue was approaching $60b and executives were projecting that it would double to $120b (they turned out to be wrong, it was possibly a career limiting move on my part at the time to start predicting that the company was heading into the red).
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?

this gives revenue for 86 as $51.25b and over 400k employees.
http://www-03.ibm.com/ibm/history/history/year_1986.html
and $58.6b for 1988
http://www-03.ibm.com/ibm/history/history/year_1988.html

a quick search turns up this for a little comparison; it gives DEC's revenue for the last qtr of 1985 and 1986 ($1.9b and $2.3b respectively)
http://query.nytimes.com/gst/fullpage.html?res=9D05E0D7123AF936A25752C0A961948260

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 12:38:47 -0600
Anne & Lynn Wheeler writes:
the issue for the corporation was that while the 4341 offered better price/performance than vax ... it also offered better performance and better price/performance than the entry-level models of the high-end machines. the high-end was where the majority of the revenue was ... and even tho the 4341 was a larger market than vax ... it still wasn't very significant revenue compared to the high-end.

I've posted before about having rewritten the i/o supervisor so the disk engineering and product test labs (bldg14 & bldg15) could do their stuff in an operating system environment
https://www.garlic.com/~lynn/subtopic.html#disk

bldg15 had gotten the first 3033 engineering model outside of pok ... and the disk testing rarely used more than one percent of the cpu ... so we could do all sorts of stuff with the rest of the processing power (one was to move over the air bearing simulation work that was being used for designing the 3380 floating heads) ... recent reference:
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture

similarly, we got the first engineering 4341 outside of endicott ... in fact, i had better 4341 access than the 4341 performance test group in endicott had ... so they had me run various of their benchmarks on the 4341 in san jose. misc. old posts about 4341 benchmarks:
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

a couple old posts comparing original 4331 & 4341 vis-a-vis vax
https://www.garlic.com/~lynn/2002k.html#1 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?

and some approx. mip rates for later 4341 models:
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 03 Jun 2006 18:52:26 -0600
Peter Flass writes:
I believe I've posted before, as an observation from the trenches at the time, that he was probably whistling in the dark. For large numbers of users the 43xx software made it less usable than the VAX. At the time I was managing a college computer center with mixed administrative/academic use of one machine, and the VAX was definitely a better fit than the 4341. I loved the 4341, but neither MVS nor VM would have really worked in our environment.

For the large corporations, no doubt compatibility with existing mainframe systems swung the balance the other way.


re:
https://www.garlic.com/~lynn/2006l.html#18 virtual memory

there was a share study that observed that the 4341 had a lot more infrastructure support requirements than vax ... so while 4341 systems may have had better system price/performance ... supporting the machine required scarce resources/skills that were hard to find (especially in smaller shops looking at only a machine or two; it was a lot easier to amortize such resources over several hundred or a thousand or two such machines).

we had done quite a bit of work somewhat in conjunction with the ecps effort for the 148 ... random reference:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

(a follow-on to the 145 and precursor to the 4341) that significantly addressed lots of those issues. however, that part of the effort was rejected by corporate ... it wasn't clear whether they were ambivalent about moving various aspects of the mainframe in a more ease-of-use direction (part of it also included effectively incorporating virtual machine support into the base of all shipped machines ... somewhat analogous to modern-day lpar support ... which corporate wasn't ready for either ... they still thought there was some chance of killing off virtual machine operation).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why I use a Mac, anno 2006

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why I use a Mac, anno 2006
Newsgroups: comp.sys.mac.advocacy,alt.folklore.computers
Date: Sun, 04 Jun 2006 11:13:37 -0600
George Graves writes:
Are you trying to tell us that Windows IS secure? That dog won't hunt, my friend.

recent thread mentioning the eu finread standard ... work on the standard was in part because of the widely held view that most PCs have a large number of vulnerabilities
https://www.garlic.com/~lynn/aadsm23.htm#45 Status of SRP
https://www.garlic.com/~lynn/aadsm23.htm#49 Status of SRP
https://www.garlic.com/~lynn/aadsm23.htm#50 Status of SRP

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual Virtualizers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual Virtualizers
Newsgroups: comp.arch
Date: Sun, 04 Jun 2006 11:07:50 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
Clearly, if it is called upon to simulate operations that it didn't expect to have to simulate, the basic idea of virtualization would be broken - the program would learn it didn't have privileges in reality that it thought it had.

Thus, it seemed to me that the hardware would have to keep track of *three* copies of the program status word. One shows the privilege level that a virtualized program thinks it has, a second shows the privilege level it actually has - and a third shows the privilege level that its parent *thinks* it actually has.

When an operation is attempted by a child process which the parent process believed it could delegate, then one has to skip levels *upwards* to find the process that would actually do the simulation.

Is this an issue anyone has actually dealt with in real architectures, or has virtualization heretofore only allowed machines without the ability to have virtual machine child processes to be virtualized?


virtualization could be recursive to arbitrary depth.

i've posted several times before joint project in early 70s (35 years ago) between cambridge
https://www.garlic.com/~lynn/subtopic.html#545tech

and endicott to provide 370 virtual machines under cp67 ... in part because there were various instruction and field definition differences between the 360/67 virtual memory structure and the 370 virtual memory structure; probably the first use of the internal network in support of a distributed project
https://www.garlic.com/~lynn/subnetwork.html#internalnet

first, a version of the cp67 kernel was created that provided 370 virtual machines ... called "cp67h". then there was a modification of cp67 to run in a 370 virtual machine (rather than a 360) ... called "cp67i".

virtual memory hadn't yet been announced for 370 so it was supposed to be a super secret. the cambridge cp67 provided access to non-employees, including students from various educational institutions in the cambridge area (mit, harvard, bu, etc) ... so there was concern that if cp67h (providing 370 virtual machines) were run on the bare hardware ... details of 370 virtual memory might leak.

so cp67h was normally run in a 360/67 virtual machine under cp67l (which was running on the real 360/67 hardware)

so standard operation was something like

cms running in a 370 virtual machine provided by cp67i
cp67i running in a 370 virtual machine provided by cp67h
cp67h running in a 360/67 virtual machine provided by cp67l
cp67l running on the real 360/67 machine

so there was recursive tracking of the PSW ... however, managing the PSW status was rather trivial compared to managing the shadow page tables.

you can sort of think of shadow page tables as the TLB for the virtual machine.

cp67i had a set of virtual page tables that mapped cms virtual address space addresses into cp67i's (virtual) machine addresses.

cp67h had to simulate cp67i's page tables with shadow page tables. these were replications of cp67i's page tables, but with cp67h (virtual) machine addresses substituted for the addresses in cp67i's page tables.

then cp67l had to simulate cp67h's page tables (including cp67h's shadow tables simulating cp67i's page tables) using shadow tables that were duplicates of cp67h's page tables ... but substituting the "real" machine addresses for the cp67h (virtual) machine addresses.
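a minimal sketch in C (illustrative structures and names only, not cp67 code) of the composition going on here ... a shadow entry is just the guest's translation composed with the next level's translation:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t frame;     /* page frame number */
    bool     valid;     /* entry usable */
} pte_t;

/* guest_pt:  guest virtual page  -> what the guest thinks is a real frame
 * host_map:  guest "real" frame  -> frame at the next level down
 * shadow_pt: guest virtual page  -> next-level frame (what the hardware uses) */
bool shadow_fill(const pte_t *guest_pt, const pte_t *host_map,
                 pte_t *shadow_pt, uint32_t guest_vpage)
{
    pte_t g = guest_pt[guest_vpage];
    if (!g.valid)               /* guest page fault ... reflect it to the guest */
        return false;

    pte_t h = host_map[g.frame];
    if (!h.valid)               /* this level's page fault ... page it in first */
        return false;

    shadow_pt[guest_vpage].frame = h.frame;     /* composed translation */
    shadow_pt[guest_vpage].valid = true;
    return true;
}

with another nesting level (cp67h under cp67l), the "host_map" used here is itself a shadow table built the same way one level down ... which is why the shadow table maintenance dominated the cost rather than the PSW tracking.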

misc. past posts mentioning cp67l, cp67h, cp67i stuff
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005j.html#50 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT

misc. past posts mentioning shadow page tables:
https://www.garlic.com/~lynn/94.html#48 Rethinking Virtual Memory
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004l.html#67 Lock-free algorithms
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#70 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005h.html#11 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#38 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#39 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#42 wheeler scheduler and hpo
https://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2005o.html#8 Non Power of 2 Cache Sizes
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#6 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#12 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#19 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006l.html#15 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual Virtualizers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual Virtualizers
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 04 Jun 2006 11:59:29 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
On my web page, at

http://www.quadibloc.com/arch/ar0703.htm

I note a subtle precaution that needs to be taken on a computer architecture which allows full, unrestricted virtualization - including the ability to run virtualization software *in a virtual machine*.

Basically, the snag I saw was this:

A program running in a virtual machine may believe itself to have privileges it does not; when it executes an instruction covered by this, its parent simulates them for it.

What happens, though, when that program is itself a parent - and it includes, among the native privileges given to a child process, some which it only *believes* itself to possess?


re:
https://www.garlic.com/~lynn/2006l.html#21 Virtual Virtualizers

so the original VMA (virtual machine assist) was done on the 370/158 (and then replicated on the 370/168).

vm370 would load a special value in control register six (CR6) that was otherwise defined as unused/reserved in 370 architecture.

normally virtual machines were run in "problem" state ... and all "privileged/supervisor" state instructions would interrupt into the vm370 hypervisor for simulation. with CR6 loaded, there was support for some supervisor state instructions to be executed in problem state using virtual machine rules.
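a minimal sketch in C (illustrative only, not 158 microcode) of the dispatch effect ... with the assist enabled, the covered privileged instructions execute directly under virtual machine rules instead of trapping into the hypervisor for software simulation:

#include <stdbool.h>
#include <stdio.h>

enum op { OP_LPSW, OP_SSK, OP_SIO, OP_OTHER };

/* subset covered by the assist ... the particular list here is illustrative */
static bool vma_handles(enum op op)
{
    return op == OP_LPSW || op == OP_SSK;
}

/* how a guest privileged op gets handled */
static const char *dispatch(enum op op, bool vma_enabled)
{
    if (vma_enabled && vma_handles(op))
        return "executed in problem state using virtual machine rules";
    return "privileged-op interrupt ... simulated by the vm370 hypervisor";
}

int main(void)
{
    printf("LPSW, assist on : %s\n", dispatch(OP_LPSW, true));
    printf("LPSW, assist off: %s\n", dispatch(OP_LPSW, false));
    printf("SIO,  assist on : %s\n", dispatch(OP_SIO, true));
    return 0;
}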

vm370 ran into some issues with this with cms shared, protected pages. for the morph from cp67/cms to vm370/cms, cms was reorganized to take advantage of the 370 architecture's 64k (16 4k-page) segment "protect". segment table entries for different virtual address spaces could point at the same page table (allowing multiple different virtual address spaces to share the same real pages). the 370 virtual memory architecture provided for a "read-only" bit in the segment table entry.

however, the hardware development to retrofit virtual memory to the 370/165 got something like six months behind schedule. in an escalation meeting, the 165 engineers said they could make up six months if certain features from the 370 virtual memory architecture were dropped ... one of those features was segment protection. that proposal carried the day ... however, cms shared segment operation (having multiple different virtual address spaces sharing the same real pages in a protected manner) was impacted.

so, in order to preserve the capability of sharing common real pages ... it was necessary to drop back to a hack using standard storage protection (carried forward from 360 architecture). this provided for setting a storage protect key associated with every 2k block of storage (which could apply just store protect or both store & fetch protect). the program status word then carried a corresponding storage protect key. a PSW storage protect key of zero allowed instructions to access/modify any storage. A non-zero PSW storage protect key meant that instructions could only access/modify 2k blocks of storage that had a storage protect key matching the PSW storage protect key.

So CMS virtual address spaces were run with a hack that forced all zero storage protect keys (in the cms virtual address space) to non-zero. Furthermore, any new (virtual) PSW loaded by CMS with a zero (PSW) storage protect key was forced to non-zero. The problem was that the virtual machine microcode assist (VMA) only had the rules for standard virtual machine privileged instruction operation (it didn't provide for supporting the fiddling of storage protect keys). As a result, CMS virtual address spaces with "shared segments" had to be run with VMA disabled. It was possible to run a CMS virtual address space w/o any shared segments ... and have VMA enabled ... but the advantage of running with shared real pages (and w/o VMA) was normally much larger than the advantage of running with VMA enabled (and w/o shared real pages).
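a minimal sketch in C of the storage-key hack (illustrative values, not vm370 source) ... shared pages keep key zero and any zero key the cms virtual machine tries to use gets forced non-zero, so its stores can never match the shared pages' key:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* 360-style check ... key 0 is "master", otherwise the keys must match */
static bool store_allowed(uint8_t psw_key, uint8_t block_key)
{
    return psw_key == 0 || psw_key == block_key;
}

/* the hack ... zero keys from the virtual machine are forced to non-zero */
static uint8_t cms_force_nonzero(uint8_t requested_key)
{
    const uint8_t FORCED_KEY = 0xE;     /* illustrative choice of non-zero key */
    return requested_key == 0 ? FORCED_KEY : requested_key;
}

int main(void)
{
    uint8_t shared_block_key  = 0;      /* shared, protected 2k block */
    uint8_t private_block_key = 0xE;    /* the guest's own storage */
    uint8_t guest_psw_key     = cms_force_nonzero(0);   /* guest asked for key 0 */

    printf("store into private block: %s\n",
           store_allowed(guest_psw_key, private_block_key) ? "allowed" : "protected");
    printf("store into shared block : %s\n",
           store_allowed(guest_psw_key, shared_block_key) ? "allowed" : "protected");
    return 0;
}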

VMA was significantly expanded for 370/xa on the 3081 with the SIE instruction. recent post referencing an old SIE discussion
https://www.garlic.com/~lynn/2006j.html#27 virtual memory

Amdahl then introduced "hypervisor" which was a virtual machine subset all implemented in hardware/microcode (not requiring a software virtual machine hypervisor) ... also mentioned in the SIE ref.

response was "PR/SM" on the 3090 ... which offered similar capability. this evolved into LPARs. Initially PR/SM provided assist for only one level "deep" ... i.e. if a software virtual machine operating system was running in a PR/SM (LPAR) hypervisor ... a lot of the virtual machine assist for its virtual machine was not operational.

there have been recent discussions in various mainframe n.g. about PR/SM being upgraded to handle two level "deep" operation ... i.e. virtual machines running under a software virtual machine operating system which, in turn, was running in PR/SM (LPAR) hypervisor, would have virtual machine microcode assists operational.

misc. past posts about the cms storage protect hack for shared segments ... after shared segment protect was dropped from 370 virtual memory architecture:
https://www.garlic.com/~lynn/2005.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#9 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006j.html#41 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual Virtualizers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual Virtualizers
Newsgroups: comp.arch
Date: Sun, 04 Jun 2006 12:17:35 -0600
Anne & Lynn Wheeler writes:
so there was recursive tracking of the PSW ... however, the managing PSW status was rather trivial compared to managing the shadow page tables.

re:
https://www.garlic.com/~lynn/2006l.html#21 Virtual Virtualizers

the other "shadow" process is for virtual machine I/O programs. the operating system in the virtual machine is building I/O programs using what it thinks are "real addresses" (but in fact are virtual addresses of the virtual machine).

for i/o programs, the supervisor implementing the virtual machine support has to make a "shadow" copy of the i/o program in the virtual machine and then swizzle all the virtual machine addresses to "real" addresses (as well as pinning the associated virtual pages to the real addresses for the duration of the i/o).
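a minimal sketch in C (illustrative layout, not cp67's actual CCW translation) of that shadow copy ... guest addresses get swizzled to machine addresses and the backing pages get pinned for the duration of the i/o:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_OFF(a) ((a) & ((1u << PAGE_SHIFT) - 1))

typedef struct {
    uint8_t  op;        /* channel command */
    uint32_t addr;      /* data address */
    uint16_t count;
} ccw_t;

typedef struct {
    uint32_t frame;     /* machine frame backing this guest page */
    bool     resident;
    int      pin_count; /* >0 means the page can't be stolen */
} page_t;

/* copy guest CCWs into a shadow list with machine addresses ... returns false
 * if a guest page isn't resident (caller would fault it in and retry);
 * for simplicity this ignores data areas that cross a page boundary */
bool build_shadow_ccws(const ccw_t *guest, ccw_t *shadow, size_t n,
                       page_t *guest_pages)
{
    for (size_t i = 0; i < n; i++) {
        page_t *p = &guest_pages[guest[i].addr >> PAGE_SHIFT];
        if (!p->resident)
            return false;
        p->pin_count++;                     /* pin for the duration of the i/o */
        shadow[i]      = guest[i];
        shadow[i].addr = (p->frame << PAGE_SHIFT) | PAGE_OFF(guest[i].addr);
    }
    return true;
}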

with regard to sie, pr/sm and recursive virtual machine support by hardware virtual machine performance assists ...
https://www.garlic.com/~lynn/2006l.html#22 Virtual Virtualizers

here is a discussion of some of the changes for zSeries hardware
http://www.vm.ibm.com/perf/tips/z890.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Google Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 05 Jun 2006 07:36:00 -0600
Craddock, Chris wrote:
Pat Helland (formerly with Tandem and MS, now with Amazon) has written some very lucid and entertaining discussions about how economics are changing their system design points. He was one of the originators of the Tandem Non-Stop transaction system and a life-long transaction processing bigot. Now he's talking openly about his ACID apostasy. If Pat is ready to cast that aside, I think everyone else ought to at least take it seriously.

re:
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture

a lot of the ACID (and TPC) stuff originated with Jim. When Jim left the system/r group (original relational/sql implementation):
https://www.garlic.com/~lynn/submain.html#systemr

and went to tandem, we would frequently drop by and visit him. In fact, I got blamed for something called tandem memos ... couple posts with minor refs:
https://www.garlic.com/~lynn/2005c.html#50
https://www.garlic.com/~lynn/2006h.html#9

Later when we were doing ha/cmp (on non-mainframe platform)
https://www.garlic.com/~lynn/subtopic.html#hacmp

and out preaching availability and scale-up on commodity priced hardware
https://www.garlic.com/~lynn/95.html#13

Jim and I had some disagreements ... of course he was with DEC at the time, and they were pitching vax/clusters.

Of course, later he was up on the stage announcing commodity priced clusters for scale-up and availability.

for other drift ... a recent thread discussing some vax/vms and mid-range mainframe market somewhat from late 70s into mid and late 80s:
https://www.garlic.com/~lynn/2006l.html#17
https://www.garlic.com/~lynn/2006l.html#18
https://www.garlic.com/~lynn/2006l.html#19

Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 05 Jun 2006 10:29:58 -0600
Ed Gould wrote:
Timothy:

I profess I have never installed VM (unless you count once 30 years ago). That being said, it's never the d/l'ing that is the difficult part ... it's always the "post" downloading that gets to be a PITA and always requires some (some say quite a bit of) expertise. The VM system I helped set up on a 4331 (w/3310's IIRC) was not a breeze by any stretch. IIRC it was an IPO (but I could be remembering incorrectly). But to get back to MVS, the same thing can be said of a SERVPAC. I won't talk too much about servpac's as I have already indicated my dislike for them on here in the past. The d/l is almost never hard; the customization is always the gotcha IMO. So please don't say the install is easy as it is only about 1/3 (1/4?) of the job. You are making a broad statement, IMO, and putting a broad brush on the effort and thereby making it seem like any idiot can do so. This seems to be an effort by IBM (starting in the 1990's) to claim that sysprogs are no longer needed. IBM (even in the SERVPAC classes) discussed that they are no longer needed, that any joe blow can do a servpac.

I am not picking on LE but if you take the defaults that LE put out, a lot of your batch programs will not work correctly. The same can be said for other "optional" customization items that need careful monitoring at customization times. Most likely an untrained sysprog would take all the defaults. I was training a sysprog at the time of the servpac and I gave him the chore to determine which customization jobs were needed and he didn't have a clue. Like I said I don't wish to pick on ONE component but LE is a good target (sigh).

IBM (and you) seems to be sending out signals that we sysprogs are no longer needed. I am lucky to be in retirement and not have to put up with this BS from IBM anymore.


we did a lot of work for vm originally on 138/148 .... besides ecps
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

there was a lot of investigation trying to make it almost as transparent a part of the machine as current day LPAR ... basically pre-installed with lots of usability stuff on every machine that went out the door

however this was back in the days when corporate still thought they had a chance to kill vm ... and while they allowed endicott to ship ecps support, the idea that every machine that went out the door had it pre-installed was blocked (along with lots of the usability stuff)

POK had the vm development group in burlington mall shut down and all the people were told they had to move to POK to work on the (internal only) VMTOOL supporting mvs/xa development (the justification was that mvs/xa development couldn't meet schedule unless they had all the vm developers working on it also) ... and there would be no more vm products for customers. endicott managed to pick up some of the vm370 mission and rescue some of the people from having to move to POK (although quite a few stayed in the boston area and went to places like DEC to work on what was to become VMS).

however, this (138/148) was the leading edge of some companies starting to order the boxes in large numbers. this really accelerated in the 4331/4341 time-frame where it wasn't unusual to have customers ordering the boxes a couple hundred at a time. old reference to one such situation
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

the issue became: if the number of machines increases by two orders of magnitude (100 times) ... where does the two orders of magnitude increase in the number of support people come from?

this period also saw a big explosion in the number of vm/4341s on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

which was almost all vm machines already. at the time of the arpanet cutover to internetworking protocol on 1/1/83, there were possibly 100 arpanet nodes with somewhere between 100 and 255 connected system hosts
https://www.garlic.com/~lynn/subnetwork.html#internet

however the internal network was almost 1000 nodes by that time ... almost all vm (and non-sna) machines. a recent thread
https://www.garlic.com/~lynn/2006k.html#40 Arpa address
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address

vax was also selling into that same mid-range market (as the 4331 and 4341). there was some study finding that the 4341 had better price/performance and a claim that something like 11,000 vax sales should have been 4341s ... recent post mentioning the subject:
https://www.garlic.com/~lynn/2006l.html#17 virtual memory

however, in that period there was a SHARE study/report that found that a lot of customers were buying vax (instead of vm/4341s) because much less in the way of resources/skills was needed to install, support and maintain the systems (although overall the 4331/4341 did still sell more in total than vax, in part because of large customers ordering them a couple hundred at a time).

then there was an expectation of a similar explosion in sales for the 4381 (4341 follow-on) ... but by that time customers were starting to buy workstations and high-end PCs for that market segment. you can also see a similar effect on vax sales going into the mid-80s.
https://www.garlic.com/~lynn/2006k.html#31
https://www.garlic.com/~lynn/2002f.html#0

The issue in the 80s (especially the early 80s) wasn't that sysprogs were no longer (absolutely) needed ... it was that if you had a customer with a couple large mainframes in a datacenter ... and they ordered 210 4341s
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

where was the additional trained staff going to come from?

Google Architecture

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 06 Jun 2006 06:55:05 -0600
Phil Payne wrote:
And Google's database(s) are also hopelessly out of date. I have logs from my web site that show Google (despite its "sitemaps" programme) simply hasn't spidered changed pages for weeks. MSN, Yahoo, Ask (and even IBM Almaden) visit much more frequently.

i have the opposite experience ... the major guys (including google) each appear to hit every html file several times a day. almaden is maybe 1/10th that ... but as i've mentioned before, i suspect they are all using it as a test/regression case (possibly because of the very large ratio of hrefs to text).

much of the information is maintained with information base technology (that i developed) ... and applications are used to generate/update the html files. for instance, the merged financial taxonomy and glossary has greater than 35,000 hrefs in a single file.
https://www.garlic.com/~lynn/index.html#glosnote
https://www.garlic.com/~lynn/financial.htm

the ietf rfc index files currently have an aggregate of nearly 125,000 hrefs.
https://www.garlic.com/~lynn/rfcietff.htm

a few recent posts mentioning the subject:
https://www.garlic.com/~lynn/2006h.html#35 64-bit architectures & 32-bit instructions
https://www.garlic.com/~lynn/2006i.html#11 Google is full
https://www.garlic.com/~lynn/2006l.html#0 history of computing

Google Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 06 Jun 2006 07:08:27 -0600
R.S. wrote:
The only thing I can compliment is there is no better engine than Google, is there? So, I'm going to keep using google, until I find something better. Can be mainframe based if you want. Or audi (car) based, I don't care.

BTW: outdated pages are quite useful sometimes. I found information which was already deleted from the original page. I can use the outdated copy. I like it. Usually such pages are far from the first three hits.


re:
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#7 Google Architecture
https://www.garlic.com/~lynn/2006l.html#8 Google Architecture
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#26 Google Architecture

then there is the way-back machine. in a thread looking at UK chip&pin vulnerabilities, a URL (from a few years ago) that went into some discussion of the yes card vulnerability appeared to have become cloaked. however, it was still available at the way-back machine.

parts of fraud & vulnerability thread mentioning yes cards ... and mentioning old web page still at the way-back machine
https://www.garlic.com/~lynn/aadsm22.htm#20 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#23 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#29 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#34 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#39 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#40 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#47 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures

Google Architecture

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 06 Jun 2006 08:05:11 -0600
Phil Payne wrote:
Google's solution is simply not scaling. Period. Check out the complaints of massive page loss both on 28th March and 26th April.

A lot of people have been suggesting that Google might move to "mainframes" - although they don't seem to mean zSeries. Perhaps a POWER or BladeServer solution. Perhaps Superdome. Someone from IBM should be talking to these people. And probably is.

As for the currency of Google's results - they're ordure (avoiding too many netnanny bounces). I can prove it - I have the logs and I've posted details in many of the webmaster forums.

The most intensive spiderer (?) at present is Yahoo, by a country mile. The most up to date index is MSN - no doubt whatsoever about it. Ask is pretty well up there too. Google is MILES behind on both currency and content.

Search Usenet for "the Google dance". Check out
http://google.fergusons.dk every now and then.

(P.S. Object REXX is just GREAT for web server log analysis. Perhaps SAS would be better, but $$$$.)


there are possibly two different subthreads here ... the massively parallel, non-mainframe technology in use by all the major search engines is unlikely ever to migrate to mainframes ... and the quantity/quality of the google massively parallel, non-mainframe implementation vis-a-vis the quantity/quality of the other massively parallel, non-mainframe implementations.

i'm seeing nearly identical total daily hits from google and yahoo crawlers ... however, the yahoo hits are coming from a much larger number of different, unique originating ip-addresses

Mainframe Linux Mythbusting (Was: Using Java in batch

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 06 Jun 2006 11:57:52 -0600
Marian Gasparovic wrote:
Why are mainframe people so reluctant to change? I know cases where mainframe people refused to implement new applications, so they were implemented on a different platform; old applications were removed as well as the mainframe. I witnessed this situation personally at one customer before I joined IBM. Now when I work for IBM in the mainframe market I know the fights we have to fight. Mainframe platform and people are perceived as least flexible. I repeat - we are perceived as least flexible.

re:
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

very Boyd and OODA-loop oriented
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

I sponsored Boyd a number of times at internal corporate seminars in the early 80s (it started out being slightly less than a full day, but as Boyd evolved some of his talks it became harder and harder to get it all crammed into a single day) .... some number of people would go in thinking it would be a very non-civilian oriented talk and were surprised to find how applicable it was to all sorts of business endeavors (it has since become much more acceptable to reference boyd's OODA-loop concepts in business settings, especially where rapid adaptation and agility to deal with changing circumstances is required).

having projected in the late 80s that things were headed into the red and would have to change ... in the early 90s, we would periodically visit places like somers for discussions on the subject. nobody would really disagree that things were going to have to change ... but you could go back a couple months later and find nothing had changed.

one possible observation was that there were a large number of senior people with possibly 25-30 years of experience whose perceived value was based on experience with a long-standing, relatively consistent paradigm. any significant changes in the paradigm would significantly reduce the perceived value of these individuals' experience.

there appeared to be a large contingent of people that didn't disagree that things were going to have to change ... but they were doing everything possible to forestall it until at least after they, personally, had retired (and then it would be somebody else's problem)

One or two CPUs - the pros & cons

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: One or two CPUs - the pros & cons
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 06 Jun 2006 15:08:00 -0600
Charles Mills wrote:
Are you sure? That's totally contrary to my impression.

There are three states for the above machine:

- both tasks waiting for I/O
- one task waiting for I/O and the other task computing
- either both tasks computing, or if a single CPU, one computing and the other ready-to-run and waiting for the CPU

Clearly processor speed is irrelevant to the first state. For the second state, a single, faster processor is clearly an advantage, because the single running task will run faster (and could not take any advantage of two CPUs). For the final state, you either have one task running at "200 MIPS" or two tasks running at "100 MIPS" - roughly equivalent situations from a thruput point of view. So clearly, the two 100-MIPS CPUs are no faster in the first state, slower in the second state, and no faster in the third state - and therefore almost certainly slower, not faster, overall. (Even before we consider the multi-processor overhead that you alluded to in your full post.)


for a two-processor SMP ... an SMP kernel can add possibly 20-30 percent overhead (your mileage may vary) compared to a uniprocessor kernel running on a single processor machine.

370s had extremely strong memory consistency for cache operations ... a two-processor 370 SMP would run the processor hardware at 90 percent of a uniprocessor ... to allow for handling cross-cache consistency chatter ... so the bare two-processor hardware started out at 1.8 times that of a uniprocessor. adding in typical smp kernel overhead, a two-processor smp got something like 1.5 times the thruput of a uniprocessor.
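rough worked numbers (one way to combine the figures above, stated as an assumption):

#include <stdio.h>

int main(void)
{
    const double hw = 2 * 0.9;              /* 1.8x raw two-processor hardware */
    const double kernel_overhead[] = { 0.20, 0.30 };

    for (int i = 0; i < 2; i++)
        printf("smp kernel overhead %.0f%% -> ~%.2fx uniprocessor thruput\n",
               kernel_overhead[i] * 100, hw / (1.0 + kernel_overhead[i]));
    return 0;
}

which lands in the 1.4-1.5 range (your mileage may vary, as above).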

there were games I played with highly optimized and extremely efficient smp kernel processing along with some games related to cache affinity ... and sometimes you could come out with thruput greater than two times a uniprocessor (having twice the cache size and some games with cache affinity and cache hit ratios more than compensating for the base two-processor machine hardware running at only 1.8 times a single processor).

when the 3081 came out it was only going to be in multiprocessor versions (so the uniprocessor vis-a-vis multiprocessor slow-down wasn't going to be evident). however ACP/TPF didn't have multiprocessor support and that represented a significant customer base. Frequently you found ACP/TPF running under VM on a 3081 (solely using VM to manage two-processor operation). eventually they were forced into coming out with the single processor 3083 ... whose individual processor ran nearly 15 percent faster than a 3081 processor (because of the elimination of the slow-down provisions for cross-cache chatter)

running the processors (in multiprocessor mode) at only .9 that of a uniprocessor (to allow for cross-cache chatter) was only the start. any actual cross-cache chatter could result in even further hardware thruput degradation. going to the four-processor 3084 ... the cross-cache chatter effects got worse (in the two-processor case, a cache was getting signals from one other cache; in the four-processor case, a cache was getting hit with signals from three other caches).

in that time-frame you saw both the VM and MVS kernels restructured so that internal kernel structures and storage management were carefully laid out on cache-line boundaries and done in multiples of cache-lines ... to reduce the impact of stuff like cache-line thrashing. That restructuring supposedly got something like a five percent overall system thruput increase.
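a minimal sketch in C of the restructuring idea (illustrative, not the actual VM or MVS change) ... give each processor's frequently-updated kernel data its own cache line so the lines don't bounce between caches:

#include <stdint.h>

#define CACHE_LINE 128      /* assumed line size ... use the machine's actual value */

/* one element per processor, each aligned to its own cache line
 * (gcc/clang alignment attribute used here for illustration) */
struct per_cpu_counters {
    uint64_t dispatches;
    uint64_t page_faults;
} __attribute__((aligned(CACHE_LINE)));

static struct per_cpu_counters counters[4];     /* e.g. a four-processor 3084 */

/* each processor updates only its own element, so no cache-line thrashing */
void count_dispatch(int cpu) { counters[cpu].dispatches++; }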

there was some joke that to compensate for the smp cache effects, the 3090 caches used a machine cycle ten times faster than that of the 3090 processor machine cycle.

there can be secondary considerations. in the 158/168 time-frame ... 370/158-3 at around 1mip processing was about at the knee of the price/technology curve. the 370/168-3 at around 3mip processing was way past the knee of price/technology curve ... and cost significantly more to build/manufacture.

at one point we had a project called logical machines to build a 16-way smp using 158-3 engines ... that still cost less to manufacture (parts plus build) than a single processor 168. we were going great guns ... until some upper executive realized that MVS was never going to be able to ship 16-way SMP support within the life-time of the project and killed the effort (in part because it wouldn't look good if there was a flagship 16-way smp hardware product and there was no MVS capable of running on it). we had also relaxed some cache consistency requirements ... that made it much less painful getting to 16-way smp operation.

something similar to your description of machine processing states has also been used in the past to describe the overhead of virtual machine operation. if the guest is in wait state ... the amount of virtual machine overhead is zero. if the guest is executing only problem state instructions, the virtual machine overhead is zero. it isn't until you start executing various supervisor state instructions that you start to see various kinds of virtual machine overhead degradation.

Google Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 07 Jun 2006 06:19:23 -0600
Phil Smith III wrote:
No, Google succeeds because "Good enough is good enough" (SM, me). It works well enough to satisfy end-users, so they use it. Yes, Big Daddy is a problem; yes, many webmasters are unhappy; but Google continues to work well enough to power the Internet economy (high-falutin' words, but, in my experience, NOT overblown). The complaints about varied results, stale pages, intermittent spidering, etc. go back to ... Well, forever. They're evidence of suboptimal-ness, not "failure" or "a sham". I'm not in love with Google, have no stake in them (wish I did!), but the vitriol heaped upon them is unreasonable. Google works, period. 10**n successful searches per day prove that empirically.

about ten years ago, i had an opportunity to spend some time with people at NIH's national library of medicine. at the time, they had a mainframe bdam implementation that dated from the late 60s. two of the people that had worked on the original implementation from the 60s were still around. we were able to exchange some war stories ... because i had an opportunity to be at a university that was involved in the original cics product beta test. they had an onr grant to do a library project and i got to shoot cics and bdam bugs.

at the time, somebody commented that there was something like 40k professional nlm librarians world-wide. the process was that they would sit down with a doctor or other medical professional for a couple hrs, take down their requirements and then go off for 2-3 days and do searches ... eventually coming back with some set of results.

nlm had passed the search threshold of an extremely large number of articles back around 1980 and had a severe bimodal keyword search problem. out to five to eight keywords ... there would still be hundreds of thousands of responses ... and then adding one more keyword (to the search) would result in zero responses. the holy grail of large search infrastructures has been to come up with a number of responses greater than zero and less than a hundred.

in the early 80s, nlm got an interface, grateful med. grateful med could ask for the return of the number of responses ... rather than the actual responses. grateful med would keep track of searches and counts of responses. the process of searching seemed to involve a slightly semi-directed random walk ... looking for a query that met the holy grail ... greater than zero responses and less than one hundred.
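a minimal sketch in C (hypothetical interface, not grateful med itself) of that count-only loop ... ask just for the number of hits and keep adjusting the keyword set, hunting for the >0 and <100 "holy grail":

#include <stdio.h>
#include <stddef.h>

/* toy stand-in for the count-only query ... models the bimodal behavior:
 * hundreds of thousands of hits out to several keywords, then zero */
static long hit_count(size_t nkeywords)
{
    return (nkeywords <= 7) ? 250000 : 0;
}

int main(void)
{
    for (size_t n = 1; n <= 9; n++) {
        long hits = hit_count(n);
        printf("%zu keywords -> %6ld hits%s\n", n, hits,
               (hits > 0 && hits < 100) ? "  <-- holy grail" : "");
    }
    /* no keyword count lands in the usable band ... which is why the searcher
     * had to random-walk over which keywords were used, not just how many */
    return 0;
}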

finding acceptable responses is a problem common to most environments after the number of items passes billions, regardless of the implementation platform.

Google Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
CC: ibmmain <ibm-main@bama.ua.edu>
Date: Wed, 07 Jun 2006 08:01:16 -0600
Anne & Lynn Wheeler wrote:
then there is the way-back machine. in a thread looking at UK chip&pin vulnerabilities, a URL (from a few years ago) that went into some discussion of the yes card vulnerability, appeared to have become cloaked. however it was still available at the way-back machine.

re:
https://www.garlic.com/~lynn/2006l.html#27 Google Architecture

for even more drift ... a news item from later yesterday
UK Detects Chip-And-PIN Security Flaw
http://www.cardtechnology.com/article.html?id=20060606I2K75YSX

APACS says the security lapse came to light in a recent study of the authentication technology used in the UK's new "chip-and-PIN" card system.


... snip ...

and some comment
https://www.garlic.com/~lynn/aadsm23.htm#55 UK Detects Chip-And-PIN Security Flaw

not too long after the exploit (from earlier deployments) was documented in 2002 ... it was explained to a group from the ATM industry ... leading somebody in the audience to quip
do you mean that they managed to spend a couple billion dollars to prove that chips are less secure than magstripes


Google Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 07 Jun 2006 18:28:58 -0600
Anne & Lynn Wheeler wrote:
for even more drift ... a news item from later yesterday

UK Detects Chip-And-PIN Security Flaw
http://www.cardtechnology.com/article.html?id=20060606I2K75YSX

APACS says the security lapse came to light in a recent study of the authentication technology used in the UK's new "chip-and-PIN" card system.

... snip ...

and some comment
https://www.garlic.com/~lynn/aadsm23.htm#55 UK Detects Chip-And-PIN Security Flaw

not too long after the exploit (from earlier deployments) being documented in 2002 ... it was explained to a group from the ATM industry ... leading somebody in the audience to quip do you mean that they managed to spend a couple billion dollars to prove that chips are less secure than magstripes.


re:
https://www.garlic.com/~lynn/2006l.html#32 Google Architecture

a little drift back to ibm:
http://www-03.ibm.com/industries/financialservices/doc/content/solution/1026217103.html gone 404, but lives on at wayback machine
https://web.archive.org/web/20061106193736/http://www-03.ibm.com/industries/financialservices/doc/content/solution/1026217103.html

from above:
Safeway and its technology partner IBM were involved in the first 'Chip and Pin' trials held in the UK in 1997. Recently, Safeway engaged IBM again to provide the Electronic Payment System (EPS) infrastructure in support of the company's push forward with the introduction of 'Chip and Pin.'

... snip ...

the 2002 article mentioning the yes card vulnerability describes an exploit involving chip&pin deployments in 2002 and earlier.

the most recent article from yesterday describes the current chip&pin deployment as apparently having the same vulnerability described in the 2002 article mentioning yes card exploits.

for the yes card exploits in the 90s and thru (at least) 2002, technology that had been around for some time involving compromised and/or counterfeit terminals (that had been harvesting magstripe data and PINs for creating counterfeit debit cards) ... was adapted to harvesting chip&pin "SDA" data.

The harvested "SDA" data was then loaded into a counterfeit chip. The terminal chip&pin standard called for authenticating the card and then asking the card 1) if the entered PIN was correct (YES), 2) if the transaction was to be offline (YES), and 3) if the transaction was within the account credit limit (YES). The harvested "SDA" data was sufficient for a counterfeit card to fool a terminal ... and then the counterfeit card was programmed to always answer YES to all the terminal's questions (resulting in giving the counterfeit card the yes card name). It was not actually necessary to harvest the valid PIN since the counterfeit yes card would always claim any entered PIN was valid.

Part of the issue with the yes card vulnerability was that it was an attack on the terminal (not the card). Regardless of how valid chip&pin cards had been programmed ... once attackers had harvested "SDA" data ... they were able to create counterfeit yes cards (and attack chip&pin terminals).
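a minimal sketch in C (simplified, assumed interface) of the terminal-side flow described above ... once the static SDA data verifies, the terminal is trusting the card's three yes/no answers, so a counterfeit card that always answers YES sails through offline:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool sda_valid;                     /* static data verifies (copies still verify) */
    bool (*pin_ok)(int pin);            /* card: was the entered PIN correct? */
    bool (*wants_offline)(void);        /* card: authorize offline? */
    bool (*within_limit)(long amount);  /* card: within the credit limit? */
} card_t;

/* counterfeit "yes card" behavior ... harvested SDA plus three YES answers */
static bool yes_pin(int pin)       { (void)pin;    return true; }
static bool yes_offline(void)      {               return true; }
static bool yes_limit(long amount) { (void)amount; return true; }

static const char *terminal_decision(card_t *card, int entered_pin, long amount)
{
    if (!card->sda_valid)            return "declined (static data did not verify)";
    if (!card->pin_ok(entered_pin))  return "declined (card says PIN wrong)";
    if (!card->wants_offline())      return "sent online for authorization";
    if (!card->within_limit(amount)) return "sent online for authorization";
    return "approved offline";
}

int main(void)
{
    card_t yes_card = { true, yes_pin, yes_offline, yes_limit };
    printf("counterfeit card, wrong PIN: %s\n",
           terminal_decision(&yes_card, 9999, 50));
    return 0;
}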

somewhat related posting
https://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-and-PIN Security Flaw

Dual Core CPUs are slower than Dual Single core CPUs ??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dual Core CPUs are slower than Dual Single core CPUs ??
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 08 Jun 2006 11:12:44 -0600
"Mayank" writes:
I believe, if independent single-threaded applications need to be run, then SMP (dual single core processors) shall provide a better throughput.

CMP (dual core processors) have more than one execution unit on the same die, possibly sharing L2 cache and FSB. CMP (and so did SMT) probably came into existence because Instruction Level Parallelism was not providing further parallelism (with power playing an important role). If multi-threaded applications need to be run, then CMP in most cases shall provide better throughput. This is due to inter thread data sharing which shall hide the memory latencies.


and the multiple threads are conserving cache lines in many cases by making use of the exact same data (so you may be getting higher per instruction cache hit ratio for the same number of cache lines).

there is an analogy to this from long ago and far away involving real storage for tss/360 paging (from the 60s). tss/360 was originally announced to run on a 512kbyte 360/67 ... but the tss/360 (fixed) kernel was rapidly growing. eventually the minimum was 768kbytes and to really get anything done with tss/360 you needed 1024kbytes (largest memory configuration).

then two-processor tss/360 was benchmarked on a two-processor 360/67 with two megabytes of real storage (each processor came with 1mbyte max. and multiprocessor support allowed the addressing to be linear) ... and tss/360 thruput came out around 3.5 times that of tss/360 uniprocessor operation.

somebody made the claim that tss/360 scale-up, multiprocessor support and algorithms were obviously the best in the industry ... being able to get 3.5 times the thruput with only two times the resources.

it turns out that it was a relative measurement; by an absolute measure (as opposed to a purely relative measurement), both tss/360 uniprocessor and multiprocessor thruput were quite bad.

the issue was that the tss/360 kernel requirements had grown so that attempting to perform almost any operation ... with the amount of real storage left over for paging in a 1mbyte configuration ... would page thrash. with double the real storage (2mbytes) ... the amount of real storage left over for application paging increased by a factor of 5-10 times (compared to a single processor, 1mbyte configuration) ... resulting in tss/360 seeing 3.5 times the aggregate thruput (in the two-processor configuration) relative to the single processor configuration (however, neither number was actually that remarkable).
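rough worked numbers implied above (the exact split is an assumption) ... with a fixed kernel around 768kbytes, a 1mbyte machine leaves roughly 256kbytes for paging while the 2mbyte two-processor configuration leaves roughly 1280kbytes, about 5 times as much:

#include <stdio.h>

int main(void)
{
    const int kernel_kb    = 768;               /* fixed tss/360 kernel, roughly */
    const int configs_kb[] = { 1024, 2048 };    /* one- vs two-processor storage */

    for (int i = 0; i < 2; i++)
        printf("%4d kbytes real storage -> %4d kbytes left for paging\n",
               configs_kb[i], configs_kb[i] - kernel_kb);
    return 0;
}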

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Token-ring vs Ethernet - 10 years later

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Token-ring vs Ethernet - 10 years later
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 08 Jun 2006 15:03:23 -0600
Gilbert Saint-Flour wrote:
The last three examples were sponsored (or developed) by IBM, and many IBM competitors supported the non-IBM solution precisely because it was that, non-IBM. In the case of Micro-channel and OS/2, licensing issues didn't help with PC companies like Compaq and HP.

TR also got a lot of bad press because a single PC could wreak havoc on the ring simply because it was configured for 4 Mb/s instead of 16 Mb/s, and finding the culprit was sometimes quite a challenge. Ethernet, of course, had a lot of problems of its own, but it didn't have this one.


there are a whole bunch of issues.

as part of the SAA terminal emulation strategy,
https://www.garlic.com/~lynn/subnetwork.html#emulation

the T/R cards were built with per adapter thruput targeted at the terminal emulation market segment (say 300 PCs on the same ring). austin had designed & built their own 4mbit t/r (16bit isa) for workstation environments. for rs/6000 they were forced to use the corporate standard 16mbit microchannel t/r card. this card had lower per card thruput than the pc/rt 4mbit t/r card (they weren't allowed to do their own 16mbit microchannel t/r card that had even the same per card thruput as their 4mbit 16bit ISA t/r card).

as part of moving research up the hill from sjr to alm, the new alm building had extensive new wiring. however, in detailed tests they were finding that 10mbit ethernet had higher aggregate thruput and lower latency over the CAT4 wiring than 16mbit t/r going over the same CAT4 wiring.

in the SAA time-frame we had come up with 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

and were out pitching it to customer executives .... including examples showing 10mbit (CAT4 wiring) ethernet deployments compared to 16mbit t/r deployments (using same CAT4 wiring).

we were taking lots of heat from SAA forces which were actively trying to contain 2-tier/client-server and return the paradigm to the terminal emulation from the first half of the 80s (so you didn't need faster per card thruput because you were stuck in terminal emulation paradigm and you were stuck in terminal emulation paradigm because of the limited per card thruput).

we were also taking lots of heat from the t/r contingent. one of the t/r centers had published a paper showing 16mbit t/r compared to "ethernet" ... with ethernet degrading to less than 1mbit aggregate effective thruput. it appeared to be using the ancient 3mbit ethernet specification which didn't even include listen before transmit (part of the 10mbit standard).

about the same time, the annual acm sigcomm conference had a paper that took a detailed look at commonly deployed ethernet. one of the tests had 30 stations in a tight low-level device driver loop transmitting minimum sized packets as fast as possible. In this scenario, effective aggregate thruput of 10mbit ethernet dropped off to 8.5mbits from a normal environment with effective aggregate thruput around 9.5mbits.

disclaimer: my wife is co-inventor for token passing patents (US and international) from the late 70s.

Token-ring vs Ethernet - 10 years later

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Token-ring vs Ethernet - 10 years later
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 08 Jun 2006 18:42:55 -0600
Charles Mills wrote:
Price and also simplicity of implementation.

Price is especially significant when people are tip-toeing into something not sure if they are going to like it - that was the case with VHS and Beta. "I'll get one of these cheap VHS VCRs, and if I like it, I'll get a good Beta later." Of course, once they had a library of VHS tapes, "later" never came.


re:
https://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later

microchannel 16mbit t/r cards were going for something like $900/card

a 16mbit t/r LAN had lower aggregate network thruput than 10mbit ethernet

an individual microchannel 16mbit t/r adapter card had lower per-card thruput than almost any enet card and even lower per-card thruput than the pc/rt 4mbit t/r 16bit isa card.

...................

there were 10mbit 16bit isa ethernet cards with numerous different chips ... intel (82586), amd (lance), and several others ... with list price in the $100-$200/card range but some starting to show up with street price approaching $50.

an ethernet lan had normal aggregate effective thruput in the 9.5mbit range (both higher aggregate thruput and lower latency than 16mbit t/r)

you could get effective 9.5mbit thruput from a large percentage of these cards ... equivalent to the aggregate effective lan thruput

...................

the 16mbit microchannel t/r cards had a target design point of terminal emulation with something like 300 stations sharing the same lan. it wasn't necessary that any individual card have significant effective thruput (yet they went for possibly 10 times the price of competitive enet cards).

as more environments started moving from terminal emulation paradigm to client/server paradigm ... you were starting to have server asymmetric bandwidth requirements with individual (server) adapter card thruput equivalent to aggregate lan thruput ... i.e. servers needed to have thruput capacity equal to aggregate requirements of all clients.

there were no token-ring card products designed to satisfy the server side of the client/server asymmetric bandwidth requirement (i.e. a single adapter card capable of sustaining full aggregate lan bandwidth) ... while a large number of the enet card products provided individual cards that were capable of sustaining peak aggregate lan thruput (and went for as little as almost 1/20th the price of the equivalent 16mbit microchannel t/r cards).
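rough arithmetic behind that asymmetry (illustrative numbers consistent with the figures in these posts):

#include <stdio.h>

int main(void)
{
    const double lan_effective_mbit = 8.0;  /* 16mbit t/r, effective aggregate */
    const int    clients            = 300;  /* terminal-emulation stations     */

    printf("per-client share  : ~%.3f mbit/sec\n", lan_effective_mbit / clients);
    printf("server card needs : ~%.1f mbit/sec (the full aggregate)\n",
           lan_effective_mbit);
    return 0;
}

i.e. a terminal-emulation client only ever needed a tiny slice, while a server card had to sustain close to the whole lan.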

since a large percentage of all the enet card products could sustain 9.5mbit ... they worked well for both server deployments ... as well as peak instantaneous client use.

Google Architecture

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Google Architecture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
CC: ibmmain <ibm-main@bama.ua.edu>
Date: Fri, 09 Jun 2006 06:34:12 -0600
re:
https://www.garlic.com/~lynn/2006l.html#27 Google Architecture
https://www.garlic.com/~lynn/2006l.html#32 Google Architecture
https://www.garlic.com/~lynn/2006l.html#33 Google Architecture

oh and late breaking topic drift:

Bank admits flaws in chip and PIN security
http://www.dailymail.co.uk/pages/live/articles/news/news.html?in_article_id=385811&in_page_id=1770
Millions at risk from Chip and Pin
http://www.thisismoney.co.uk/saving-and-banking/article.html?in_article_id=409616&in_page_id=7
Millions in danger from chip and pin fraudsters
http://www.dailymail.co.uk/pages/live/articles/news/news.html?in_article_id=389084&in_page_id=1770&in_a_source=
UK Banks Expected To Move To DDA EMV Cards
http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=11497625028614136145&block=

and some comments:
https://www.garlic.com/~lynn/aadsm24.htm#1 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#2 UK Banks Expected To Move To DDA EMV Cards

Token-ring vs Ethernet - 10 years later

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Token-ring vs Ethernet - 10 years later
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
CC: ibmmain <ibm-main@bama.ua.edu>
Date: Fri, 09 Jun 2006 08:25:43 -0600
Anne & Lynn Wheeler wrote:
as more environments changed from terminal emulation paradigm to client/server paradigm ... you were starting to have server asymmetric bandwidth requirements with individual (server) adapter card thruput equivalent to aggregate lan thruput ... i.e. servers needed to have thruput capacity equal to aggregate requirements of all clients.

re:
https://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006l.html#36 Token-ring vs Ethernet - 10 years later

the SAA drive was controlling feature/function as part of trying to maintain the terminal emulation paradigm and forestall transition to 2tier/client-server
https://www.garlic.com/~lynn/subnetwork.html#emulation

.... or what we were out doing, pitching 3-tier architecture and what was to become middleware (we had come up with 3-tier architecture and were out pitching it to customer executives, taking heat from the SAA forces)
https://www.garlic.com/~lynn/subnetwork.html#3tier

consistent with the SAA drive and attempts to maintain the terminal emulation paradigm were the low per-card effective thruput and the recommended configurations with 100-300 machines sharing the same 16mbit t/r lan (even though effective aggregate bandwidth was less than 8mbit, with typical configurations dividing that between 100-300 machines).

for some drift, the terminal emulation paradigm would have been happy to stick with the original coax cable runs ... but one of the reasons for the transition to t/r supporting the terminal emulation paradigm was that there were a large number of installations running into lb/sq-ft floor-loading problems from the overloaded long cable tray runs (there had to be a physical cable running from the machine room to each & every terminal).

all of this tended to fence off the mainframe from participating in the emerging new advanced feature/functions around the client/server paradigm. enforcing the terminal emulation paradigm resulted in the server feature/function being done outside of the datacenter and loads of datacenter corporate data leaking out to these servers.

this was what had prompted a senior person from the disk division to sneak a presentation into the communication group's internal, annual world-wide conference, where he started out the presentation by stating that the communication group was going to be responsible for the demise of the mainframe disk division. recent reference
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

for some additional drift, we sporadically claim that the original SOA (service oriented architecture) implementation was the payment gateway.

we had come up with 3-tier architecture (and what was going to be called middleware) and had also done our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

we were later asked to consult with a small client/server startup that wanted to perform payment transactions on their server.

turns out that two of the people from this ha/cmp meeting
https://www.garlic.com/~lynn/95.html#13

were now at this small client/server startup and responsible for something that was being called the commerce server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

Token-ring vs Ethernet - 10 years later

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Token-ring vs Ethernet - 10 years later
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 09 Jun 2006 12:58:53 -0600
R.S. wrote:
I don't know VHS-Beta war, but in times of early LANs a cost of TR adapter was approx. $800, while Novell NE2000 was less than $100. That's the difference. For *small* networks both standards were OK. Additionally Eth didn't need any additional box, while TR required MAU. Hint: it was ethernet over coaxial cable - no repeater/hub needed (switch was not born yet).

Novell bought Eagle company just to produce cheap networking cards. To make LAN available for everyone. They did the revolution. In the best years Novell had approx. 70% of file server marketshare (and over 90% in Poland).


for more folklore about san jose disk division attempting to provide client/server oriented facilities
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later

in the very early 80s, san jose had a project called DataHub and part of the code implementation was subcontracted out to an organization in Provo. One of the DataHub people in San Jose was making nearly weekly flights to Provo.

At some point corporate dictated abandoning the project ... and the organization in Provo was allowed to assume rights to all of the code that they had been contracted to implement.

misc. past posts mentioning DataHub
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design

virtual memory

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 09 Jun 2006 22:02:45 -0600
"Del Cecchi" writes:
And the execs limited the size of machines that the Farmers on the Tundra could build. S/38 was bumping up against the top of the bracket for years. Both S/36 and S/38 were problems that way. And when there was a choice in later years no one wanted small 370 machines. Remember Racetrack?

re:
https://www.garlic.com/~lynn/2006l.html#19 virtual memory

not only did workstations and high-end PCs start to take over the mid-range ... but also the low-end. this started to happen before racetrack ... I claim that is also why 4381 (4341 follow-on) didn't see the continued explosive growth that the 4341 experienced. the change in the mid-range market also impacted vax. previous post
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

One or two CPUs - the pros & cons

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: One or two CPUs - the pros & cons
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Jun 2006 15:50:17 -0600
Ted MacNEIL wrote:
Take a look at LSPR. z/990 2084-308 -- 2993 MIPS z/990 2084-309 -- 3299 MIPS

Difference: 306 MIP with the engine nominally 450 MIPS.


the redbook "effective zseries performance monitoring using resource measurement facility" gives LSPR ratios for lots of stuff.
http://www.redbooks.ibm.com/abstracts/sg246645.html

for mixed-mode workload, 2084-332 is around 20 times the thruput of 2084-301; the 2084-302 is 1.9 times the thruput of 2084-301 for the same workload. The ratio of thruput increase to number of processors is then 1.9/2 = 95% (for 2084-302) and 20/32 = 62.5% (for 2084-332).

your number of 2084-308 with 2993/450 gives a ratio of 6.65 and for 2084-309 the ratio 3299/450 is 7.33.

the overall MIP ratio to processors of 6.65/8 is 83% and 7.33/9 is 81%. however, the incremental MIP ratio of 306/450 is 68%

from a separate source that provided 2094-7xx "SI MIPs"

for 2094-701, SI MIPs is 608 and 2094-702, SI MIPs is 1193 or an increase of 585 SI MIPs

for 2094-731, SI MIPs is 11462 and for 2094-732, SI MIPs is 11687 or an increase of 225 MIPs

the MIP ratio of 2094-732/2094-701 or 11687/608 is 19.22 ... or approx the same thruput ratio as given for 2084-332 compared to 2084-301 (i.e. the incremental 225 MIPs going from 31 to 32 results in overall effective ratio consistent with other LSPR thruput ratio comparisons).

for 2094-732, the overall SI MIP ratio to processors of 19.22/32 is 60% however the incremental SI MIP ratio of 225/608 (i.e. adding an additional processor going from 31 to 32) is 37%.

i.e. the overall increase in processor complex thruput is increasing at a slower rate than the increase in the number of processors (i.e. by 32 processors, the overall processor complex is 60% of 32 single processors). however, the incremental benefit of adding one additional processor is declining even faster (the incremental benefit of going from 31 processors to 32 processors is only 37% of a full single processor).

so possible interesting additional columns in tables for the redbook might be 1) the current thruput ratio number (to single processor) but divided by the number of processors and 2) incremental thruput divided by the thruput of a single processor.

This would show the percentage overall effective use of the complex (vis-a-vis a single processor) as the number of processors increased, plus the incremental benefit of adding one additional processor (compared to a simple single processor).

for all i know, such information is already in the redbook ... I just didn't see it as I was quickly scanning thru.
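as a concrete sketch of the two suggested columns, here is the trivial C arithmetic (purely illustrative) applied to the 2094-7xx SI MIPs figures quoted above from that separate source:

#include <stdio.h>

int main(void)
{
    double one_way = 608.0;    /* 2094-701 SI MIPs */
    double two_way = 1193.0;   /* 2094-702 SI MIPs */
    double way31   = 11462.0;  /* 2094-731 SI MIPs */
    double way32   = 11687.0;  /* 2094-732 SI MIPs */

    /* column 1: overall thruput ratio divided by number of processors */
    printf("2094-702 overall: %.2f per processor\n", (two_way / one_way) / 2.0);
    printf("2094-732 overall: %.2f per processor\n", (way32 / one_way) / 32.0);

    /* column 2: incremental thruput divided by single-processor thruput */
    printf("2nd processor adds %.2f of a single processor\n",
           (two_way - one_way) / one_way);
    printf("32nd processor adds %.2f of a single processor\n",
           (way32 - way31) / one_way);
    return 0;
}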

my previous post
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons

mentioned that a two-processor 370 had 1.8 times the hardware MIPs of a single processor, since each processor ran at a .9 cycle to allow for cross-cache chatter ... but typical thruput was in the 1.5-1.6 range because of the additional kernel overhead required to keep multiple processors coordinated.

The very first text editor

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The very first text editor
Newsgroups: alt.folklore.computers
Date: Sat, 10 Jun 2006 16:13:32 -0600
Elliott Roper writes:
4 things with only 95% chance of consistently working piped together equals 81% chance of the whole mess working. it sounds like a cheap shot, but it was my thinking at the time.

actually had this happen with a business person reporting oriented application in the early 90s on unix: a shell script that did some stuff, piped it to sort, which then piped it to some other stuff.

in a business production, batch environment, all sorts of things get reported and excepted, and automagic features even get developed for handling a wide range of exceptions, etc ... somewhat evolving over 50 years with the assumption that the person responsible for the application is not present (there may or may not be people present when the application actually ran ... but there definitely was an orientation that the responsible individuals weren't present and therefore it was necessary to evolve an infrastructure supporting the paradigm w/o the responsible people around).

in contrast, most of the infrastructures that have evolved from interactive platforms have tended to assume that responsible people are present and can take whatever corrective actions are necessary. such infrastructures can be more "user friendly" for some things ... but can be enormously difficult to adapt to automated "lights out" operation (like huge web farms).

so, in this particular example, it turned out that (for whatever reason) sort exhausted the available file space ... but there was no exception for it and/or anything to catch the exception even if there had been one ... and the process continued happily on ... but with only about 20 percent of the people ... eventually even replacing the original file.
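purely for illustration, a minimal C sketch (not the original shell script; the output file name is hypothetical) of the kind of check that was missing ... treating a failed or short write, e.g. ENOSPC when the output filesystem fills, as a hard batch exception instead of continuing:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void write_or_die(FILE *out, const char *buf, size_t len)
{
    if (fwrite(buf, 1, len, out) != len) {
        fprintf(stderr, "output failed: %s\n", strerror(errno));
        exit(EXIT_FAILURE);   /* batch exception: stop, don't replace the original file */
    }
}

int main(void)
{
    FILE *out = fopen("report.tmp", "w");   /* hypothetical file name */
    if (out == NULL) { perror("fopen"); return EXIT_FAILURE; }

    write_or_die(out, "sorted record\n", 14);

    /* fclose can also fail on a full filesystem when buffers are flushed */
    if (fclose(out) != 0) { perror("fclose"); return EXIT_FAILURE; }
    return EXIT_SUCCESS;
}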

then of course when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we found ourselves talking to some places with 5-nines (aggregate system .99999) availability requirements (five minute total outage per year).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

One or two CPUs - the pros & cons

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: One or two CPUs - the pros & cons
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Jun 2006 20:14:06 -0600
Gerhard Adam wrote: What are you losing? It isn't as if these processors are off playing solitaire. They're paying the cost of communication to allow more simultaneous operations for YOUR workload. The primary benefit of this approach is to reduce the queueing impacts of multiple units of work competing for a finite resource. If you don't think this is a reasonable exchange, there is nothing prohibiting you from running your workload on a series of uniprocessors that fully exploit their "MIPS" rating.

This issue of "losing" resources is a false one. The implication is that somehow this is being down on purpose. The resources aren't lost, but rather redirected to accommodate the increased complexity of the system. There is virtually nothing I can think of that scales upwards without a loss of either efficiency, cost, or complexity.

couple previous postings in this thread
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros and cons
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros and cons

minor topic drift, for a long time the cornerstone of SMP operation was the compare&swap instruction. at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

charlie had been working on SMP efficiency and fine-grain locking with CP67 on the 360/67. He invented the compare&swap instruction (the mnemonic chosen because CAS are charlie's initials). the first couple trips to POK trying to get compare&swap into the 370 architecture were not successful. we were told that the mainstream POK operating systems didn't care about CAS ... that they could perfectly well get by with TS (test-and-set). In order to get CAS included in the 370 architecture ... a non-SMP application for CAS would have to be created. Thus were born the descriptions of how to use various flavors of CAS in enabled, multi-threaded application code (whether running on single processor or SMP, multiprocessor configurations). The original descriptions were part of the instruction programming notes ... but in later principles of operation were moved to the appendix. misc. past posts on smp, compare&swap, scale-up, etc
https://www.garlic.com/~lynn/subtopic.html#smp
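a minimal sketch of that sort of enabled, multi-threaded application use, written with C11 atomics and pthreads (an analogue only, not 370 assembler and not the principles-of-operation examples): two threads updating a shared counter with compare-and-swap, working the same whether it runs on one processor or several.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int shared_total;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        int old = atomic_load(&shared_total);
        /* retry until the compare-and-swap succeeds, i.e. nobody else
           changed the value between the load and the swap */
        while (!atomic_compare_exchange_weak(&shared_total, &old, old + 1))
            ;   /* on failure, old is refreshed with the current value */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("total = %d\n", atomic_load(&shared_total));   /* expect 200000 */
    return 0;
}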

tightly-coupled tends to assume extremely fine-grain communication, and the coordination overhead reflects that. loosely-coupled tends to have much coarser-grained coordination. given that your workload can accommodate coarser-grained coordination ... a few 20-processor complexes in a loosely-coupled environment ... may, in fact, provide overall better thruput than a single 60-processor operation (where the incremental benefit of each additional processor may be getting close to 1/3rd of a single processor by the time you hit a 32-processor configuration).

we saw that in the late 80s when we got involved in both fiber channel standard effort as well as the scalable coherent interface standard effort.

FCS was obviously a loosely-coupled technology ... which we worked on when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

also minor reference here
https://www.garlic.com/~lynn/95.html#13

One of the engineers in austin had taken some old fiber optic communication technology that had been lying around POK since the 70s (eventually announced as escon on mainframes) and did various tweaks to it ... got it running with about ten percent faster effective thruput, and adapted some optical drivers from the cdrom market segment that were less than 1/10th the cost of the drivers that had been defined in POK. This was adapted for full-duplex operation (simultaneous full bandwidth transmission in both directions) and released as SLA (serial link adapter) for rs/6000. Almost immediately he wanted to start on a proprietary version of it that would run 800mbits (simultaneously in both directions). Since we had been working with the FCS standards operation, we lobbied long and hard to drop any idea of doing a proprietary definition and instead work on the FCS standard (1gbit, full-duplex, simultaneously in both directions). Eventually he agreed and went on to become the editor of the FCS standards document.

SCI could be used in purely tightly-coupled operation ... but it had a number of characteristics which could also be used to approximate loosely-coupled ... and then there were the things in-between ... for NUMA (non-uniform memory architecture).

SCI could operate as if it was memory references ... but provide a variety of different performance characteristics (somewhat analogous to old 360 LCS ... where some configurations used it as extension of memory for standard execution and other configurations used it like electronic disk .... more akin to 3090 expanded store).

sequent and dg took standard four-processor intel shared-memory boards ... and configured them on the 64-port SCI memory interface for a total of 256 processors that could operate as a shared memory multiprocessor.

convex took two-processor HP shared-memory boards ... and configured them on the 64-port SCI memory interface for a total of 128 processors that could operate as a shared memory multiprocessor.

while background chatter for sci is very low ... actually having a lot of different processors constantly hitting the same location can degrade much faster than a more traditional uniform memory architecture. at some point the trade-offs can cross.

so partitioning can be good ... convex took and adapted MACH for the exemplar. one of the things they could do to cut down fine-grain coordination scale-up issues was to partition the exemplar into possibly 5-6 twenty-processor shared memory multiprocessors ... then they could simulate loosely-coupled communication between the different complexes using synchronous memory copies.

this was partially a hardware scale-up issue ... scaling a shared kernel that was constantly hitting the same memory locations from a large number of different real processors ... and partially using partitioning to manage complexity growth. this is somewhat like how LPARs are used to partition and manage the complexity of different operations that may have somewhat different goals ... which would be a lot more difficult using a single system operation.

for other historical topic drift ... MACH was picked up from CMU ... the place that the andrew file system, andrew windows & widgets, camelot, etc had come out of. In this period there was Project Athena at MIT ... jointly funded by DEC and IBM to the tune of $25m each (from which came Kerberos, X, and some number of other things), while IBM funded CMU to the tune of $50m. Mach was also picked up as the basis for NeXT and later for the apple operating system (among others). some misc. kerberos refs:
https://www.garlic.com/~lynn/subpubkey.html#kerberos

LANL somewhat sponsored/pushed HiPPI thru standards organization (as standard of Cray's copper parallel channel). LLNL somewhat sponsored/pushed FCS thru standards organization (as a fiber version of a serial copper connectivity that they had deployed). And SLAC somewhat sponsored/pushed SCI thru the standards process.

misc. old posts mentioning HiPPI, FCS, and/or SCI
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#6 Cache coherency protocols: Write-update versus write-invalidate
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?

The very first text editor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The very first text editor
Newsgroups: alt.folklore.computers
Date: Sun, 11 Jun 2006 07:28:46 -0600
jmfbahciv writes:
Oh, cool. YOu have just described the difference between what DEC was good at and what IBM was good at..timesharing vs. IBM's batch-mode. I don't think DEC ever learned how to design "lights-out" production. It knew how to do real-time such as instrumentation and data capturing but that's not real data processing. Our folklore was based on people attending computers all the time if it was for general usage.

re:
https://www.garlic.com/~lynn/2006l.html#42 The very first text editor

note that the time-sharing stuff
https://www.garlic.com/~lynn/submain.html#timeshare

originating out of the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

was totally different from the batch stuff that the company was better known for. however, the dominance of the perception about the batch stuff seemed to frequently obscure the fact that the company's time-sharing install base was actually larger than any other company's time-sharing install base (or even total install base) ... i.e. past comments that just the number of vm/4341 installs was comparable to the total number of vax machine sales (in the late 70s and early 80s).
https://www.garlic.com/~lynn/2006j.html#23 virtual memory
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#31 PDP-1
https://www.garlic.com/~lynn/2006l.html#16 virtual memory
https://www.garlic.com/~lynn/2006l.html#17 virtual memory
https://www.garlic.com/~lynn/2006l.html#18 virtual memory
https://www.garlic.com/~lynn/2006l.html#19 virtual memory
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#40 virtual memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 07:21:59 -0600
Paul Gilmartin wrote:
Can you envision running the Internet on SNA?

o 8-character flat namespace?

o No DNS?

Or am I mistaking attributes of VTAM for SNA? (But still, where's SNA's DNS?)


SNA isn't networking ... at least in the sense used by most of the rest of the world. SNA is quite good at managing a large number of terminals ... or things effectively emulating a large number of terminals
https://www.garlic.com/~lynn/subnetwork.html#emulation

in a heavily SNA-centric environment, the term "peer-to-peer" networking was invented to describe standard networking (as understood by most of the rest of the world) as differentiated from SNA communication infrastructures.

in the early SNA days, my wife had co-authored a peer-to-peer networking architecture with Bert Moldow ... AWP39 (which never got announced as a product; the sna group possibly viewed it as competition). Later, when she was con'ed into going to POK to be in charge of loosely-coupled architecture ... she originated Peer-Coupled Shared Data
https://www.garlic.com/~lynn/submain.html#shareddata

which didn't see a lot of uptake ... except for the guys doing IMS hot-standby ... at least until parallel sysplex came along.

the closest thing to networking within any kind of SNA context was AWP164 ... which the SNA organization non-concurred with announcing. After some escalation, the AWP164 announcement letter ... "APPN" ... was carefully crafted not to imply any relationship between AWP164/APPN and SNA.

I used to chide the person responsible for AWP164 that he was wasting his time trying to craft networking into SNA context ... they were never going to appreciate and/or accept him ... he would be much better off spending his time working within a real networking context like the internet.

note that the explosion in the internal corporate network in the 70s and early 80s ... wasn't an SNA implementation either .... however it had a form of gateway capability implemented in every node. slight drift ... it was another product brought to you courtesy of the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

as were virtual machines, the (smp) compare&swap instruction, the invention of GML (original ancestor of sgml, html, xml, etc), and numerous interactive technologies. i've also asserted that all the performance measurement, modeling, workload profiling, etc, etc ... evolved into what is now called capacity planning
https://www.garlic.com/~lynn/submain.html#benchmark

in any case (in part because of the gateway like function), the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than arpanet/internet from just about the beginning until possibly mid-85. the "arpanet" got its gateway capability with the great switchover to internetworking protocol on 1/1/83.
https://www.garlic.com/~lynn/subnetwork.html#internet

recent thread that discussed the size of the internal network vis-a-vis the size of the arpanet
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2006j.html#46 Arpa address
https://www.garlic.com/~lynn/2006j.html#49 Arpa address
https://www.garlic.com/~lynn/2006j.html#50 Arpa address
https://www.garlic.com/~lynn/2006j.html#53 Arpa address
https://www.garlic.com/~lynn/2006k.html#3 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#10 Arpa address
https://www.garlic.com/~lynn/2006k.html#12 Arpa address
https://www.garlic.com/~lynn/2006k.html#40 Arpa address
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address

misc. past posts mentioning AWP39 and/or awp164 (appn):
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#23 Channel Distances
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 07:49:59 -0600
Ed Gould wrote:
I don't think SNA has anything like a DNS (warning my info is old). The last time I did a 3745 gen you had to hardcode a lot of subareas. Although I do think they have updated it since then (hope so anyway). There were some route tables that could get hairy. I had access to the RTG tool and it made a complicated map reasonably easy. IIRC, SNI was another mess that helped, but it was still complicated. JES2 could add complexity as he could start routing output via another node that you didn't expect if you weren't careful. To most (all?) nodes in my 200+ node JES2 network I turned off the JES2 routing as we were connected all over the place and I did not want the output to be done through a 3rd party node.

I suppose if the nodes were all one company it wouldn't make a difference. But financial information was too important to let others see it.


re:
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

a lot of the original hasp/jes2 networking code, originally running on the internal network, still carried the "TUCC" identifier on source code "cards".

HASP had a 255-entry table for pseudo devices. the hasp/jes2 support started out defining networking nodes using unused entries in the pseudo device table. that typically allowed JES2 to define something like 160-200 networking nodes. misc. past hasp &/or jes2 postings
https://www.garlic.com/~lynn/submain.html#hasp

by the time JES2 networking was announced as a product, the internal network had over 255 nodes
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and by the time JES2 had support for 999 nodes, the internal network was over 1000 nodes, and by the time JES2 had support for 1999 nodes, the internal network was over 2000 nodes. JES2 would trash any network traffic that came thru the node where JES2 didn't have the destination node in its local table. However, JES2 also would trash any network traffic where the originating node wasn't in its local table. This made JES2 almost unusable on the internal network except as a carefully controlled end-node (not as any sort of intermediate store&forward node).
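purely as an illustration (not actual HASP/JES2 code, and the node names are hypothetical), the described behavior amounts to something like this: a fixed-size local node table, with traffic discarded whenever either the destination or the originating node is missing from that table.

#include <stdio.h>
#include <string.h>

#define MAX_NODES 255   /* the HASP pseudo-device table size */

static const char *node_table[MAX_NODES] = { "NODEA", "NODEB" };   /* hypothetical node names */

static int known_node(const char *node)
{
    for (int i = 0; i < MAX_NODES && node_table[i]; i++)
        if (strcmp(node_table[i], node) == 0)
            return 1;
    return 0;
}

static void route(const char *origin, const char *dest)
{
    if (!known_node(dest) || !known_node(origin)) {
        printf("discard traffic %s -> %s (node not in local table)\n", origin, dest);
        return;
    }
    printf("forward traffic %s -> %s\n", origin, dest);
}

int main(void)
{
    route("NODEA", "NODEB");    /* both known: forwarded */
    route("NODEX", "NODEB");    /* unknown origin: trashed, even as store&forward traffic */
    return 0;
}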

the other problem was that JES2 network architecture was notorious for getting networking mixed up with work load processing. relatively minor changes in header formats between releases could result in one JES2 system crashing another JES2 (and associated MVS) system. there is an infamous case on the internal network where a JES2 system in San Jose was causing MVS system crashes in Hursley.

since the primary mainstay of the internal network had implemented gateway-like capability ... it also implemented a large array of different interfaces/gateways to JES2 systems .... allowing JES2 networking some modicum of participation in the internal network. because of the problem with one jes2 system causing other jes2/mvs systems to crash (due to even minor format changes between versions/releases) ... there grew up compensating processes in the internal network jes2 gateway interfaces, basically a canonical jes format representation. An internal network interface that talked directly to a real JES2 node ... would be specific to that version/release of jes2 ... and eventually had the code that converted from canonical JES2 format to the format needed by that specific JES2 system (as a countermeasure preventing different JES2 systems from causing each other to crash).

....

as an aside ... the corporate requirements for the internal network required all transmissions leaving a corporate facility to be encrypted. at one point there was the claim that the internal network had over half of all the link encryptors in the world. one of the harder issues in getting internal network connectivity in various parts of the world was encrypted links crossing national boundaries ... it was frequently a really tough sell to get two (or more) different government agencies to all agree that a link going from one corporate location (in one country) to another corporate location (in a different country) could be encrypted.

disclaimer ... during part of this period my wife served a stint in the g'burg jes group.

One or two CPUs - the pros & cons

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: One or two CPUs - the pros & cons
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 08:13:01 -0600
Brian Westerman wrote:
This should not be the case. You probably have some other problem that you see manifesting itself as a big loss of capacity. The actual loss should be quite low on a z/series box, (in the area of 1% to 2% depending on a number of factors).

If you are indeed getting those results then something is terribly wrong. I think you might have some problems that need to be looked at. If you are really loosing 30% of your CPU's, then some big corrective action is warranted. If you contact me offline I can help you find out what the problem is at your site.


you can easily see in the LSPR numbers that as the number of processors increases ... the relative thruput multiplier per processor (i.e. the LSPR multiplier divided by the number of processors) declines.

the trivial corollary then is that the incremental increase contributed by each additional processor, as a percentage of a single processor, is also declining (and has to be declining faster than the overall per-processor ratio ... otherwise the simple arithmetic showing that per-processor ratio declining as the number of processors grows ... wouldn't be happening).

ref:
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons

take some of the LSPR ratios listed in this reference
http://www.redbooks.ibm.com/abstracts/sg246645.html

and do some trivial arithmetic ... divide the overall thruput multiplier by the number of processors and see that the percentage declines as the number of processors increases. in order for that percentage to decline, the incremental thruput increase contributed by each additional processor has to be decreasing.

i.e. from previous post
https://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons

from some trivial examples in
http://www.redbooks.ibm.com/abstracts/sg246645.html

in the PDF version of the reference document ... see table 5-1 LSPR table Z990 on (PDF) page 177 ... or document page 164.

going from one processor to two processors has the thruput factor going from 1.0 to 1.9 (times a single processor) ... actually a low of 1.8 to a high of 1.95, depending on workload.

if you divide the two-processor 1.9 thruput factor by the number of processors, you get .95. however, the incremental increase is .9 (going from one processor to two processors), so dividing .9 by 1 ... gives that the incremental addition of the 2nd processor is .9 of a single processor.

by the time you are at 32 processors, the thruput factor is around 20 (for different workloads it ranges between 14.17 times and 24.18 times). if you divide the thruput factor of 20 by the number of processors ... you get approximately .62. if you calculate the difference in thruput between 31 processors and 32 processors ... you find that the thruput increase from adding the one additional processor (going from 31 processors to 32 processors) is equivalent to less than 40 percent of a single processor.
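the trivial arithmetic, as a short C sketch (illustrative only; the 1.9 and roughly-20 factors are the ones quoted above from table 5-1, and the 31-way factor of 19.62 is an assumed value purely to show an increment under 40 percent):

#include <stdio.h>

int main(void)
{
    double two_way = 1.9;    /* thruput factor quoted for 2 processors */
    double way31   = 19.62;  /* assumed 31-way factor, for illustration only */
    double way32   = 20.0;   /* thruput factor quoted for 32 processors */

    printf("2-way:  %.2f per processor, 2nd processor adds %.2f of a single processor\n",
           two_way / 2.0, two_way - 1.0);
    printf("32-way: %.2f per processor, 32nd processor adds %.2f of a single processor\n",
           way32 / 32.0, way32 - way31);
    return 0;
}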

virtual memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 12 Jun 2006 09:03:01 -0600
Del Cecchi writes:
It's funny that the same factors that affected racetrack and other low end 370 machines didn't seem to hurt AS400 and S/36 (before it got shot).

so a big factor in as400 and s/36 was the low skill level needed to install and maintain. one of the issues with 4341 vis-a-vis vax ... was that while the 4341 had somewhat of a hardware price/performance advantage ... it seemed to require higher skill/people resources (trading off hardware costs vis-a-vis skill/people costs). the skill issue was somewhat less of an issue for large corporations doing multi-hundred 4341 installs.

so a lot of that 4341/vax trade-off (hardware versus people cost) started moving to significantly cheaper hardware in the mid-80s which frequently also had lower skill requirements (workstations and large PCs).

for the market segment that as400 and s/36 were in ... the people/skill trade-off appeared to still represent a dominant issue. however in the late 80s ... we had some talks with the guy running gsd application software (as400, s/38, s36) ... my wife had worked with him when they had both been in the g'burg jes group (long ago and far away). he indicated that he was seeing some amount of s36 application software running on PCs in various ways.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Supercomputers Cost

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Supercomputers Cost
Newsgroups: comp.arch
Date: Mon, 12 Jun 2006 09:04:52 -0600
Del Cecchi writes:
Don't the TPC reports have cost in them?

possibly only if you can multiply the cost/TPC by the number of TPCs

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Linux Mythbusting (Was: Using Java in batch on

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 09:35:05 -0600
gilmap@ibm-main.lst wrote:
So, then, IBM elected not to build it, and "they" stayed away.

And the Wheelers attribute grave architectural paralysis to political infighting within IBM. Pity. If it were a usable alternative to TCP/IP, and more reliable, it would be valuable.


re:
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#46 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

in hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

i had one effort to ship a peer-to-peer networking operation that had the ability to emulate 37xx (hardware/software) around the edges. it had originally been implemented on S/1 boxes by one of the RBOCs. i was going to move it to ha/cmp rs/6000 configurations.

the trivial use of a real (peer-to-peer) network provided the ability to do a large number of things for sna configurations that were impossible in real sna operation.

the dominant issue was that it would have severely impacted the communication group's 37xx revenue.

i accidentally got invited to give a presentation on the project at an SNA ARB (architecture review board) meeting in raleigh. afterwards, the person running ARB was very agitated and wanted to know how it was that I had been invited ... I think he wanted to make sure it never happened again. part of the problem was that it seemed that most of the people (or at least most of the technical people) in the audience wanted to stop work on what they were doing and come work on what I was doing. That, of course, wasn't allowed to last long.

old thread discussing some amount of that effort:
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

the post
https://www.garlic.com/~lynn/99.html#67 System/1

includes part of the presentation that I gave at the ARB meeting. it gives both a functional and a business cost analysis for comparable configurations (based on series/1 hardware ... not rs/6000 hardware).

disclaimer: as an undergraduate ... there were some things that I wanted to do with the 2702 telecommunication controller ... that it turned out couldn't be done. somewhat as a result, the university started a project that reverse engineered the mainframe channel interface and built a channel interface board for an interdata/3 ... which was programmed to emulate a telecommunication controller. there was some write-up blaming four of us for spawning the clone controller business
https://www.garlic.com/~lynn/submain.html#360pcm

the new math: old battle of the sexes was: PDP-1

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the new math: old battle of the sexes was: PDP-1
Newsgroups: alt.folklore.computers
Date: Mon, 12 Jun 2006 10:08:34 -0600
Al Balmer writes:
It's probably unstated because it isn't true. Children learn their mother tongue from their mother (and other family members.) When children show up for their first kindergarten class, they already know how to talk. In fact, one of the problems in teaching ESL is that the students are immersed in their mother tongue when they are not in school.

There is some concern among parents of such children that the history and culture will be forgotten, and where there are sufficient numbers, and the parents actually care, private schools exist. This is different than learning language.


for some topic drift ... i had done some online computer conferencing stuff in the late 70s and early 80s ... and got blamed for something called tandem memos.

somewhat as a result, a researcher was hired who sat in the back of my office for nine months, went with me to meetings, etc ... taking notes on how i communicated. they also got copies of all my incoming and outgoing email and logs of all instant messaging. a version of the final research report was also published as a stanford phd thesis ... joint between the language and computer ai depts.

the researcher turned out to have been a professional ESL teacher for several years. at one point they made some statement about my having many of the characteristics of a non-native english speaker ... but w/o any other native spoken language.

misc. past related posts (computer mediated conversation)
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Linux Mythbusting

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 11:43:37 -0600
kaplow_r@encompasserve.org.mars (Bob Kaplow) writes:
Isn't that eactly waht the internet was about 2 decades ago when an email address looked like decwrl!ihnp4!nodea!nodeb!nodec!mynode!myname

the bangs were uucp email routing ... which at some point might also traverse part of the internet (i would still see uucp bang email a decade ago).

the big transition for arpanet was moving to internetworking protocol on 1/1/83. prior to internetworking ... you could have everything under a single administrative domain (even have bbn publishing network wide maint. schedules on the arpanet imp nodes). with the transition to internetworking and multiple environments ... DNS somewhat came into its own ... in part because you were having to deal with multiple administrative/policy domains.

random uucp bang references from quick search engine use
http://www.exim.org/pipermail/exim-users/Week-of-Mon-20000124/016334.html
https://en.wikipedia.org/wiki/UUCP
http://www.faqs.org/docs/linux_network/x-087-2-exim.simple.html
http://www.tldp.org/LDP/nag/node215.html
http://www.answers.com/topic/uucp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 12 Jun 2006 12:17:04 -0600
Ted MacNEIL wrote:
NOT in this case! The packets are dropped! They are not re-sent and the app is blown off the air. It works under SNA; it bellies up under TCP/IP. Every time! Repeatable!

there was some amount of dirty tricks ... not all of which can be repeated in polite company.

with respect to the previous post about running sna thru a real (peer-to-peer) networking infrastructure
https://www.garlic.com/~lynn/2006l.html#50 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

and eliminating the communication group's whole 37xx business .... the obvious reaction was to get corporate to make sure all of my funding was cut.

they couldn't actually kill the project because it came out of another organization and most of the work was going to be subcontracted to the original (RBOC) implementers (just no money).

so showing a little ingenuity ... we went to one of the largest SNA customers. they had a huge annual budget devoted to operational and infrastructure things to compensate for SNA shortcomings. we showed that the fully funded costs for development, ship, and support of this other thing ... plus the hardware costs of replacing all the 37xx boxes ... were less than their first-year savings on all the add-on stuff that they could now eliminate (i.e. the customer would fund the total product development and product ship costs ... because they would easily recover that within the first year of operation).

getting that part stopped required other measures.

so there was an early mainframe tcp/ip implementation. in the late 80s, it would get about 44kbyte/sec aggregate thruput and consume just about a whole 3090 processor. i added rfc 1044 support to the implementation and, in some tuning tests at cray research between a 4341 (clone) and a cray machine, was able to show sustained effective thruput of approx. 1mbyte/sec using only a modest amount of the 4341 processor (limited by the controller hardware interface to the 4341 channel); i.e. about 25 times the thruput for maybe 1/20th the pathlength.

was somehow able to sneak out rfc 1044 support in the product when they weren't paying attention. misc. past posts mentioning rfc 1044 support:
https://www.garlic.com/~lynn/subnetwork.html#1044

the communication division finally came out and said that even tho sna and OSI were strategic (govs. and large institutions and organizations were all publicly claiming that all that internetworking stuff was going to be eliminated and replaced by osi)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

... that they were doing tcp/ip implementation support in vtam. there was an organization in palo alto sq (corner of page mill and el camino) that was sometimes referred to as communication "west". they got the job of subcontracting the vtam implementation to a contractor.

the first/original (vtam) implementation had tcp/ip thruput significantly higher than lu6.2. it was then explained to the contractor that all protocol analyses had shown that lu6.2 has higher thruput than tcp/ip ... and therefore any tcp/ip implementation that benchmarked with substantially higher thruput than lu6.2 was incorrect ... and the company wasn't going to pay for an incorrect tcp/ip implementation. so what did the contractor do?

in that time-frame there was some analysis of NFS implementations running on top of typical tcp/ip ... the common bsd tahoe/reno workstation implementation. there was a range of implementations from a low of 5k instruction pathlength (and five buffer copies) to something like 40k instruction pathlength ... to do a typical NFS operation.

in some detailed comparisons, it was claimed that the somewhat equivalent mainframe function (not NFS, but the closest that SAA had to NFS capability) implemented on top of LU6.2 required 160k instruction pathlength and 15 buffer copies. Also, at the time, one of the issues with doing 8kbyte buffer copies (involved in an NFS-equivalent operation) was that 15 8k buffer copies could cost more processor cycles than executing the 160k instructions.
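a back-of-the-envelope sketch (C, purely illustrative; the per-byte and per-instruction cycle costs are assumptions for illustration, not measured figures) of why the 15 8kbyte copies can rival the 160k instruction pathlength:

#include <stdio.h>

int main(void)
{
    double bytes_copied = 15.0 * 8192.0;        /* 15 buffer copies of 8kbytes each */
    double copy_cycles  = bytes_copied * 1.5;   /* assumed ~1.5 cycles per byte copied */
    double path_cycles  = 160000.0 * 1.0;       /* assumed ~1 cycle per instruction */

    printf("copy cycles: %.0f   pathlength cycles: %.0f\n", copy_cycles, path_cycles);
    printf("copies are %.0f%% of the instruction pathlength cost\n",
           100.0 * copy_cycles / path_cycles);
    return 0;
}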

Memory Mapped I/O Vs I/O Mapped I/O

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory Mapped I/O Vs I/O Mapped I/O
Newsgroups: comp.arch
Date: Mon, 12 Jun 2006 12:49:30 -0600
Terje Mathisen wrote:
Andy, it really isn't recent: Afair it must be at least five years old.

or maybe 15-20 years ago. however, back then the obvious questions trying to get homework done for them ... seemed to cluster around sept. of the new school year, when the crop of new college students got their first taste of online networking (and the snide answers seemed to contribute to the activity rapidly dropping off and then not reappearing again until the new school year the following sept).

now the questions that appear to be trying to get their homework done for them ... seem to be relatively randomly distributed all year ... with no particular obvious clustering.

virtual memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 12 Jun 2006 13:09:09 -0600
Del Cecchi writes:
It's funny that the same factors that affected racetrack and other low end 370 machines didn't seem to hurt AS400 and S/36 (before it got shot).

re:
https://www.garlic.com/~lynn/2006l.html#48 virtual memory

note that besides the people skill issue ... there were also other aspects of the target market segment. a significant percentage of the explosive vm/4341 growth in the late 70s and early 80s was personal computing time-sharing. very little of the as400 and s/36 market directly involved personal computing type operations ... so there was much less direct impact of PC personal computing on the as400 and s/36 customer market segment than on the vm/4341 personal computing time-sharing installs.

misc. posts about cp67 and vm370 personal computing time-sharing over the years
https://www.garlic.com/~lynn/submain.html#timeshare

but as i mentioned, later you did start to see porting of s/36 application software to pc platforms (in part because s/36 got shot).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DEC's Hudson fab

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Mon, 12 Jun 2006 13:45:46 -0600
KR Williams writes:
*A* specific? How about *clean*! ;-)

Other than that, there are a ton of chemicals that are rather toxic and otherwise nasty, used. Safety precautions and mechanical quality/inspection ranks right up there with civilian nukes, AIUI. Size of the facility is an issue too. Some older fabs can't reasonably be converted to 300mm (I don't know the size of Hudson's fab) because they're physically not big enough for the tooling. It would cost more to refit them than to build from scratch. Of course fabs drink a lot of water and eat gobs of electricity too. These have to be in abundance, and cheap.


a brother-in-law was the local executive/manager for the "clean room" construction for the AMD fab ??? (i forget the number now, i probably have it somewhere) in austin ... and one of my sons worked for him during the summer. it is somewhat easier building a fab class 10 clean room from scratch ... but still not easy. first just finishing the inside of the outer shell, special clean-room clothes for all the construction workers, lots of special construction processes (none of this cutting lumber with a circular saw), and things getting constant acid washes.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DEC's Hudson fab

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Tue, 13 Jun 2006 09:00:35 -0600
Howard S Shubs writes:
And FAB6 was sub-Class 1. At least, it was supposed to be. Humans did NOT go in there w/o wearing a kind of special suit which covered EVERYTHING and provided air for breathing. I was told the suit cost

"bunny" suits ...
http://www.intel.com/education/cleanroom/index2.htm

a semi-custom chip was done for an aads chip strawman
https://www.garlic.com/~lynn/x959.html#aads

in a fab in dresden ... i had to put on a bunny suit for the walk thru.

a few comments regarding aads chip strawman vis-a-vis recent chip-and-pin
https://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#1 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#2 UK Banks Expected To Move To DDA EMV Cards
https://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
https://www.garlic.com/~lynn/aadsm24.htm#7 Naked Payments IV - let's all go naked

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DEC's Hudson fab

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Tue, 13 Jun 2006 09:21:55 -0600
Anne & Lynn Wheeler writes:
a brother-in-law was foreman for the "clean room" construction for the AMD fab ??? (i forget the number now, i probably have it somewhere) in austin ... and one of my sons worked for him during the summer. it is somewhat easier building a fab class 10 clean room from scratch ... but still not easy. first just finishing the inside of the outer shell, special clean-room clothes for all the construction workers, lots of special construction processes (none of this cutting lumber with cicular saw) and things getting constant acid washes.

fab25
http://www.amd.com/us-en/Corporate/AboutAMD/0,,51_52_502_12841,00.html

it was class 1 clean room
http://www.amd.com/us-en/Corporate/AboutAMD/0,,51_52_9999_507,00.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Supercomputers Cost

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Supercomputers Cost
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 13 Jun 2006 15:18:07 -0600
Thomas Womack writes:
Yes, they do, but to the best of my knowledge nobody's run TPC-C on a Blue Gene system, or a Cray X1E, or Columbia, or the Earth Simulator yet.

here is a list of cost of various computers starting in 1944 thru 2003 ... including some supercomputers:
http://www.jcmit.com/cpu-performance.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DEC's Hudson fab

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Wed, 14 Jun 2006 09:53:09 -0600
Roland Hutchinson writes:
Not quite the same. Texas as a continued to exist, although reduced in status from a nation-state to a component state of the USA. Same deal with the short-lived California Republic, such as it was, which survives only as a slogan on the state flag.

it is my understanding that Texas still has something like that as an option ... in whatever agreement between texas and the united states incorporating texas as a state, texas retained an option under which it could decide to reorganize into five separate jurisdictions/states.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DEC's Hudson fab

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Thu, 15 Jun 2006 09:09:07 -0600
"Peter \"Firefly\" Lund" writes:
How did Japanese car companies manage to build quality cars in the UK and US? ;)

they apparently had a lot of trials and tribulations ... lots of problems, early on there were articles in the US that talked about them being accustomed to people with high school diplomas having a high school education. they were finding that they had to require junior college diplomas in order to have some reasonable expectation of getting people with a high school education.

Large Computer Rescue

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Large Computer Rescue
Newsgroups: alt.folklore.computers,comp.sys.dec,rec.games.video.arcade.collecting
Date: Thu, 15 Jun 2006 09:15:36 -0600
koehler@eisner.nospam.encompasserve.org (Bob Koehler) writes:
I don't know about RGVAC (whatever that is), but I used to have a sheet full of the 029 codes. I think I lost it, I still have my IBM "green card", but I don't see it there.

(For youngsters, an IBM green card contains a quick reference to the IBM 360 reference data, including a complete EBCDIC table, instruction opcodes, instruction formats, assembler directives, program status word fields, and peripheral access codes).


I've got several green cards and some number of yellow cards. I've also got a quick&dirty conversion of gcard ios3270 to html up
https://www.garlic.com/~lynn/gcard.html

gcard ios3270 was an attempt to emulate a lot of the green card with online 3270 screens. it doesn't quite have everything ... it doesn't have the punch card hole equivalences for bcd and ebcdic

DEC's Hudson fab

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DEC's Hudson fab
Newsgroups: alt.folklore.computers
Date: Thu, 15 Jun 2006 10:48:52 -0600
Anne & Lynn Wheeler writes:
they apparently had a lot of trials and tribulations ... lots of problems, early on there were articles in the US that talked about them being accustomed to people with high school diplomas having a high school education. they were finding that they had to require junior college diplomas in order to have some reasonable expectation of getting people with a high school education.

re:
https://www.garlic.com/~lynn/2006l.html#61 DEC's Hudson fab

at the time, the articles also talked about how, coming from a society with a 98 percent literacy rate, it took a bit of adjustment to adapt to a society with a 68 percent literacy rate.

i've posted before about stuff from a little later, in the early 90s: an article quoting a census study that half of the high-school-graduate-age people were functionally illiterate; half of the technical graduates from cal. univ/colleges were foreign born; and a major mid-western univ. commenting that between 1970 and 1990 they had to "dumb down" tech books for entering students.

recently there was some article about technical higher education programs having lots of vacant seats (possibly a lot of the former, foreign born, students are finding institutions of higher education closer to home). in any case, one institution, to try and fill the empty seats, went out and recruited a whole bunch of "special" students ... but it didn't work out; they all flunked out within 2-3 months.

misc. past posts:
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2002k.html#41 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2002k.html#45 How will current AI/robot stories play when AIs are real?
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003i.html#28 Offshore IT
https://www.garlic.com/~lynn/2003i.html#45 Offshore IT
https://www.garlic.com/~lynn/2003i.html#55 Offshore IT
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
https://www.garlic.com/~lynn/2004b.html#2 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#38 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004d.html#18 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004h.html#18 Low Bar for High School Students Threatens Tech Sector
https://www.garlic.com/~lynn/2004j.html#26 Losing colonies
https://www.garlic.com/~lynn/2005e.html#48 Mozilla v Firefox
https://www.garlic.com/~lynn/2005g.html#5 Where should the type information be?
https://www.garlic.com/~lynn/2005g.html#43 Academic priorities
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor

Large Computer Rescue

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Large Computer Rescue
Newsgroups: alt.folklore.computers,comp.sys.dec,rec.games.video.arcade.collecting
Date: Thu, 15 Jun 2006 11:06:32 -0600
long ago, as a student programmer, i had a summer job to port the 1401 MPIO program to the 360/30. basically MPIO acted as a front end doing card->tape and tape->printer/punch for the university 709. they could run the 360/30 in 1401 hardware emulation mode and run the original MPIO ... so maybe it was just a make-work job for a student programmer.

i got to design and implement my own supervisor, task manager, device handlers, and storage manager. it was eventually about 2000 source assembler cards ... and my default mechanism was to assemble it under os/360 and then reboot the machine with the stand-alone loader.

fixing a bug in the source and re-assembling took approx. an hour elapsed time ... rebooting os/360 and then re-assembling my source program (the assembly alone took a half hour elapsed time). so i got relatively good at patching the "binary" card output of the assembler. I hadn't discovered "REP" cards ... so I would find the appropriate card ... and "multi-punch" a patch using a 026 keypunch (i.e. duplicate the card up to the patch, multi-punch the patch on the new card, and then finish duplicating the rest of the card). after a while i got so i could read the keypunch holes as easily as i could read and interpret hex (I could fan the card deck looking for the TXT card with the relative program address of the location needing patching ... i.e. translating the punch holes in the card address field into hex).

one representation that still sticks solidly in my mind is the 12-2-9 punch holes for hex '02'. the convention was that assembler and compiler binary executable output cards had 12-2-9 in column one, followed by the control card "type" (i.e. ESD, TXT, RLD, END, etc).
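as a rough illustration of what fanning the deck for the right TXT card amounts to ... a small C sketch (not from any actual deck-reading program). the field offsets follow the classic os/360 object deck layout as i recall it, i.e. column 1 = x'02' (the 12-2-9 punch), columns 2-4 the card type, columns 6-8 a 24-bit relative address, columns 11-12 a halfword byte count ... so treat the offsets as approximate rather than authoritative:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define CARD_COLS 80   /* one 80-column card image, one byte per column */

/* does this card look like a TXT card whose text covers the given
   relative address?  field offsets per the classic os/360 object deck
   layout (approximate, from memory):
     col 1      x'02' (the 12-2-9 punch)
     cols 2-4   card type, e.g. "TXT"
     cols 6-8   24-bit relative address of the first text byte
     cols 11-12 halfword count of text bytes on the card             */
static int txt_card_covers(const uint8_t card[CARD_COLS], uint32_t patch_addr)
{
    if (card[0] != 0x02)
        return 0;
    if (memcmp(&card[1], "TXT", 3) != 0)  /* ascii here; ebcdic on a real card */
        return 0;

    uint32_t addr  = ((uint32_t)card[5] << 16) |
                     ((uint32_t)card[6] << 8)  |
                      (uint32_t)card[7];
    uint32_t count = ((uint32_t)card[10] << 8) | (uint32_t)card[11];

    return patch_addr >= addr && patch_addr < addr + count;
}

int main(void)
{
    /* fake card image just to exercise the check: a TXT card for
       relative address x'000400' carrying 56 bytes of text        */
    uint8_t card[CARD_COLS] = {0};
    card[0] = 0x02;
    memcpy(&card[1], "TXT", 3);
    card[5] = 0x00; card[6] = 0x04; card[7] = 0x00;
    card[10] = 0x00; card[11] = 56;

    printf("covers x'0410'? %s\n", txt_card_covers(card, 0x410) ? "yes" : "no");
    return 0;
}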

misc. past posts mentioning 12-2-9 and/or various 12-2-9 card formats
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
https://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games
https://www.garlic.com/~lynn/2004f.html#11 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2006.html#46 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006c.html#17 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#58 REP cards

Why no double wide compare and swap on Sparc?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why no double wide compare and swap on Sparc?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 15 Jun 2006 11:27:47 -0600
Andy Glew writes:
Yeah, I was thinking of correcting my older post to try to get to this.

It is not fundamentally LL/SC vs. CAS and other atomic RMWs.

It is more that LL/SC was originally proposed and implemented via an "optimistic concurrency" type of address "link", whereas CAS and other atomic RMWs are traditionally implemented via a "lock" - whether bus lock, cache lock, or address lock.

If other people try to access the lock while the RMW is in flight, they are stalled or told to go away and come back later. This guarantees forward progress.

Whereas if other people access the link between the LL and SC, it is the SC that fails. In the presence of an adversary doing ordinary stores, the LL/SC might never complete.

But these are just implementation artifacts. E.g. you can implement CAS or an atomic RMW with LL/SC, whether in user code or microcode - and such an RMW would have the same forward progress problems as LL/SC. You can implement a hybrid - try the optimistic concurrency approach first, and then try the lock approach.

Similarly, you can implement LL/SC by acquiring a lock, guaranteeing that the SC will finish. But then you need some way of backing out, protecting yourself against a malicious user who does the LL but never the SC. E.g. the Intel 860 released such a lock after 32 instructions.


the traditional locking on 360 multiprocessors had been test-and-set.
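(for anybody who hasn't written one ... a minimal C sketch of the test-and-set style spin lock; the c11 atomic_flag below is just a stand-in for the 360 TS instruction, which set the addressed byte to all ones and reported the old leftmost bit in the condition code ... an illustration, not actual 360 code)

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock_byte = ATOMIC_FLAG_INIT;  /* clear = lock free */
static long shared_counter = 0;

static void ts_lock(void)
{
    /* spin until the old value comes back clear, i.e. this caller is
       the one that changed the byte from free to held               */
    while (atomic_flag_test_and_set(&lock_byte))
        ;  /* busy wait */
}

static void ts_unlock(void)
{
    atomic_flag_clear(&lock_byte);
}

int main(void)
{
    ts_lock();
    shared_counter++;          /* update protected by the lock */
    ts_unlock();
    printf("%ld\n", shared_counter);
    return 0;
}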

charlie was doing a lot of multiprocessor fine-grain locking work for cp/67 at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

when he invented compare&swap (the name compare&swap was chosen because "CAS" matched charlie's initials).

a large part of the invention of compare&swap was that he noted much of the fine-grain test&set locking was purely to protect some simple storage updates ... aka pointers and counters ... and that compare&swap could accomplish the "locking" of the earlier test-and-set, while its semantics could also be used to directly accomplish the various storage updates w/o having to acquire separate locks.
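(as an illustration of the difference ... a minimal sketch with c11 atomics, not actual 370 CS code, of updating a counter directly with a compare&swap retry loop; note there is no separate lock acquired around the update)

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter = 0;

/* add "delta" to the counter with no separate lock: fetch the current
   value, compute the new one, and let compare&swap both detect any
   intervening update and perform the store as a single atomic step   */
static void cas_add(_Atomic long *ctr, long delta)
{
    long old = atomic_load(ctr);
    while (!atomic_compare_exchange_weak(ctr, &old, old + delta))
        ;  /* on failure, old is refreshed with the current value; retry */
}

int main(void)
{
    cas_add(&counter, 1);
    cas_add(&counter, 41);
    printf("%ld\n", atomic_load(&counter));   /* prints 42 */
    return 0;
}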

early attempts at dealing with the pok 370 architecture group to get compare&swap into the 370 architecture weren't successful. they claimed that the "mainstream" (pok) operating system group didn't see sufficient additional smp benefit of compare&swap over the earlier, simpler test-and-set for locking. they said that in order to get compare&swap justified for the 370 architecture, a use beyond smp locking had to be found.

as a result, the multi-threaded application use for various storage location updates was invented (independent of running on a single processor or a multiprocessor). in the traditional smp kernel use, the instruction stream is disabled for interrupts during the lock, load, modify, store, unlock sequence ... so there aren't a lot of deadlock problems. in multi-threaded applications, the lock, update, unlock sequence could be interrupted, with another thread getting dispatched and then deadlocking (which then requires all sorts of logic to avoid). the atomic compare&swap operation significantly reduced the various deadlock scenarios and the software complexity for dealing with them.
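(again just a sketch with c11 atomics rather than actual 370 code ... pushing an element onto a shared list with a single compare&swap on the head pointer; since no lock is ever held, a thread preempted in the middle of the update can't leave anything for the other threads to deadlock on)

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
    int          value;
    struct node *next;
};

static struct node *_Atomic list_head = NULL;

/* push a node onto the front of the list with one compare&swap on the
   head pointer; the only shared update is the single atomic step, so
   there is no lock to be holding when the thread gets preempted       */
static void push(int value)
{
    struct node *n = malloc(sizeof *n);   /* error handling omitted */
    n->value = value;
    n->next  = atomic_load(&list_head);
    while (!atomic_compare_exchange_weak(&list_head, &n->next, n))
        ;  /* n->next now holds the current head; retry */
}

int main(void)
{
    push(1);
    push(2);
    for (struct node *p = atomic_load(&list_head); p; p = p->next)
        printf("%d\n", p->value);
    return 0;
}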

the original rios/power/rs6000 had no support for shared-memory multiprocessing and no compare&swap instruction. the problem was that in the intervening years (between the early 70s and the early 90s) a number of multi-threaded applications (like large database infrastructures) had adopted use of the compare&swap instruction. in order to simplify support, aix defined a software compare&swap macro ... which interrupted into the kernel and had a special fast-path for emulating compare&swap semantics (while disabled for interrupts, aka approx. atomic compare&swap in a single processor environment).
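(i don't have the actual aix macro or kernel code, but the idea is simple enough to sketch ... on a single processor, atomicity only requires that nothing else gets dispatched between the compare and the store, so the kernel fast-path can just do the compare and the store with interrupts disabled. disable_interrupts/enable_interrupts below are hypothetical stand-ins, left as no-ops so the sketch compiles stand-alone)

#include <stdio.h>

/* hypothetical stand-ins for the kernel's real interrupt masking;
   no-ops here so the sketch is self-contained                      */
static void disable_interrupts(void) { /* kernel would mask interrupts here */ }
static void enable_interrupts(void)  { /* ... and unmask them here */ }

/* single-processor emulation of compare&swap: with interrupts masked,
   no other thread can be dispatched between the compare and the store,
   so the sequence is atomic with respect to other threads             */
static int emulated_compare_and_swap(long *addr, long old_val, long new_val)
{
    int swapped = 0;

    disable_interrupts();
    if (*addr == old_val) {
        *addr = new_val;
        swapped = 1;
    }
    enable_interrupts();

    return swapped;
}

int main(void)
{
    long word = 10;
    int ok1 = emulated_compare_and_swap(&word, 10, 20);
    printf("%d %ld\n", ok1, word);   /* 1 20 */
    int ok2 = emulated_compare_and_swap(&word, 10, 30);
    printf("%d %ld\n", ok2, word);   /* 0 20 (compare failed, no store) */
    return 0;
}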

misc. past posts on smp, compare&swap, etc
https://www.garlic.com/~lynn/subtopic.html#smp



