List of Archived Posts

2011 Newsgroup Postings (01/22 - 02/11)

America's Defense Meltdown
America's Defense Meltdown
History of copy on write
Rare Apple I computer sells for $216,000 in London
Rare Apple I computer sells for $216,000 in London
Mainframe upgrade done with wire cutters?
Mainframe upgrade done with wire cutters?
Mainframe upgrade done with wire cutters?
Mainframe upgrade done with wire cutters?
Rare Apple I computer sells for $216,000 in London
Rare Apple I computer sells for $216,000 in London
Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
Testing hardware RESERVE
Rare Apple I computer sells for $216,000 in London
Long-running jobs, PDS, and DISP=SHR
History of copy on write
WikiLeaks' Wall Street Bombshell
Rare Apple I computer sells for $216,000 in London
Melinda Varian's history page move
A brief history of CMS/XA, part 1
A brief history of CMS/XA, part 1
New-home sales in 2010 fall to lowest in 47 years
What do you think about fraud prevention in the governments?
A brief history of CMS/XA, part 1
IBM S/360 Green Card high quality scan
Melinda Varian's history page move
Rare Apple I computer sells for $216,000 in London
The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)
Mainframe upgrade done with wire cutters?
A brief history of CMS/XA, part 1
Colossal Cave Adventure in PL/I
Colossal Cave Adventure in PL/I
Colossal Cave Adventure in PL/I
A brief history of CMS/XA, part 1
Colossal Cave Adventure in PL/I
Colossal Cave Adventure in PL/I
Internal Fraud and Dollar Losses
1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
Colossal Cave Adventure in PL/I
1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
Colossal Cave Adventure in PL/I
Colossal Cave Adventure in PL/I
Productivity And Bubbles
Productivity And Bubbles
Colossal Cave Adventure in PL/I
Productivity And Bubbles
zLinux OR Linux on zEnterprise Blade Extension???
A brief history of CMS/XA, part 1
Speed of Old Hard Disks
vm/370 3081
Speed of Old Hard Disks
Speed of Old Hard Disks
A brief history of CMS/XA, part 1
Productivity And Bubbles
Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
Speed of Old Hard Disks
Productivity And Bubbles
If IBM Hadn't Bet the Company
Other early NSFNET backbone
Productivity And Bubbles
A Two Way Non-repudiation Contract Exchange Scheme
VM13025 ... zombie/hung users
vm/370 3081
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
Boeing Plant 2 ... End of an Era
If IBM Hadn't Bet the Company
vm/370 3081
Boeing Plant 2 ... End of an Era
vm/370 3081
If IBM Hadn't Bet the Company
IBM Future System
Custom programmability for 3270 emulators
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
NASA proves once again that, for it, the impossible is not even difficult
The first personal computer (PC)
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
If IBM Hadn't Bet the Company
The first personal computer (PC)
If IBM Hadn't Bet the Company
The first personal computer (PC)
NASA proves once again that, for it, the impossible is not even difficult
If IBM Hadn't Bet the Company

America's Defense Meltdown

From: lynn@garlic.com (Lynn Wheeler)
Date: 22 Jan, 2011
Subject: America's Defense Meltdown
Blog: Boyd Strategy
an article I've referenced before ... beltway bandits and gov. contracts discovering that it is more profitable to fail than to succeed ... The Success of Failure
http://www.govexec.com/management/management-matters/2007/04/the-success-of-failure/24107/

aka the gov. considers it so important ... that it will keep the money flowing ... and a stream of failures just means more money (than simple success upfront). The "wall street venality dwarfs pentagon" equivalent is too big to fail ... put all the bets on very high risk with enormous odds/payoffs ... and in case of big losses, the gov. is there with its never ending flow of money.

re:
https://www.garlic.com/~lynn/2008s.html#20 Five great technological revolutions
https://www.garlic.com/~lynn/2009e.html#53 Are the "brightest minds in finance" finally onto something?
https://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#41 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2010b.html#19 STEM crisis
https://www.garlic.com/~lynn/2010p.html#78 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010q.html#5 Off-topic? When governments ask computers for an answer
https://www.garlic.com/~lynn/2010q.html#69 No command, and control
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#75 America's Defense Meltdown
https://www.garlic.com/~lynn/2011.html#93 America's Defense Meltdown

--
virtualization experience starting Jan1968, online at home since Mar1970

History of copy on write

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of copy on write
Newsgroups: comp.arch
Date: Sat, 22 Jan 2011 14:58:46 -0500
nmm1 writes:
Yes. Its specification was and is byte-by-byte. That is one of the reasons that it was very slow.

re:
https://www.garlic.com/~lynn/2011.html#96 History of copy on write
https://www.garlic.com/~lynn/2011.html#97 History of copy on write
https://www.garlic.com/~lynn/2011.html#98 History of copy on write

360 ... all the storage operands would be pre-checked ... both starting and ending locations (for store & fetch protect, and on the 360/67 for page faults) ... before starting the instruction ... aborting the instruction w/o doing anything if a check failed. One of the byte-by-byte features was propagating a value thru a field using overlapping operands: place a zero in the 1st byte of the field, and then do a move with the target operand starting at +1 (the 2nd byte of the source is the same as the 1st byte of the target; the zero isn't in the 2nd byte of the source until the move has placed it there). some more recent machines will attempt to optimize with larger chunks when the operands don't overlap.
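
as an illustration ... a minimal C sketch of the byte-by-byte move semantics (illustration only, not actual 360 code ... the real thing is the MVC instruction; memcpy semantics would be undefined for this overlap):

  #include <stdio.h>
  #include <string.h>

  /* byte-at-a-time move, left to right, per the 360 MVC definition;
     with overlapping operands each byte read may be a byte just written */
  static void mvc(unsigned char *dst, const unsigned char *src, size_t len)
  {
      while (len--)
          *dst++ = *src++;
  }

  int main(void)
  {
      unsigned char field[8];
      memset(field, 'X', sizeof field);
      field[0] = 0;                             /* place a zero in the 1st byte */
      mvc(field + 1, field, sizeof field - 1);  /* target starts at source+1 */
      for (size_t i = 0; i < sizeof field; i++)
          printf("%02x ", field[i]);            /* prints 00 00 00 00 00 00 00 00 */
      printf("\n");
      return 0;
  }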

370 ... introduced the "long" instructions ... which were defined to be incrementally executed a byte at a time, interruptable, and restartable. some of the early implementations would precheck the long instruction operand addresses ... and abort w/o executing (instead of incrementally executing until the problem address was encountered) ... which sometimes produced unpredictable results.

somewhere recently somebody "APAR'ed" the TR/TRT instructions ... which got the behavior changed. TR/TRT takes each byte of the first operand, uses it as a displacement index into the 2nd operand (the table) and either replaces the 1st operand byte with the indexed table byte (TR) or stops on a nonzero table byte (TRT). The default was to assume that the 2nd operand was always a 256byte table and precheck both the starting and ending storage locations (for things like fetch protect). The APAR was that the 1st operand/source might contain only a limited set of values and the programmer takes advantage of that fact to build a table much smaller than 256 bytes (so always prechecking 2nd operand +256 could give erroneous faults).
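
a rough C sketch of the TR semantics and the short-table hazard (illustration only, not actual 360 code; the translate table here is hypothetical):

  #include <stdio.h>
  #include <string.h>

  /* 360 TR semantics: each byte of the 1st operand indexes into the
     2nd operand (the table) and is replaced by the byte found there */
  static void tr(unsigned char *op1, size_t len, const unsigned char *table)
  {
      for (size_t i = 0; i < len; i++)
          op1[i] = table[op1[i]];          /* index can be 0..255 */
  }

  int main(void)
  {
      /* full 256-byte table: fold lower case to upper, pass the rest through */
      unsigned char table[256];
      for (int i = 0; i < 256; i++)
          table[i] = (unsigned char)i;
      for (int c = 'a'; c <= 'z'; c++)
          table[c] = (unsigned char)(c - 'a' + 'A');

      unsigned char text[] = "translate me";
      tr(text, strlen((char *)text), table);
      printf("%s\n", text);                /* TRANSLATE ME */

      /* the APAR'ed hazard: if the data is known to contain only, say,
         values 0x00-0x0f, the programmer may build a 16-byte table; a
         hardware precheck of table+256 can then fault (fetch protect,
         page fault) even though no byte ever indexes past the short table */
      return 0;
  }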

newer implementations now check whether the (table) 2nd operand +256 bytes crosses a 4k boundary (page fault, fetch protect, etc) ... if it doesn't, the instruction is executed as before. If +256 crosses a 4k boundary, the instruction is pre-executed to see if any (source) byte value results in a table reference crossing the 4k boundary. The new "performance" recommendation is to never place the starting address of the 2nd operand/table within 256 bytes of a 4k boundary.

one of the digressions was with regard to segment versus page protect. In the original (370 virtual memory) segment protect design ... there was a single page table (per virtual segment) ... with the possibility that some virtual address spaces (sharing the segment) had r/o, store-protected access and other virtual address spaces (sharing the same segment) had r/w access.

with segment protect, all virtual address spaces could utilize the same physical page table ... with the protection specification back in each virtual address space's own segment table entry (i.e. an extra bit in the pointer to the page table). with page protect ... the protect indicator is in the (shared) page table entry (rather than in the non-shared segment table entry) ... so all virtual address spaces sharing that (segment/) page table will have the same protection.
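
a minimal sketch (hypothetical field layout, not the real 370 table formats) contrasting where the read-only bit lives in the two schemes:

  #include <stdio.h>

  struct pte {                      /* page table entry ... lives in the page
                                       table, which can be shared */
      unsigned frame        : 20;
      unsigned page_protect : 1;    /* page protect: one setting for ALL sharers */
  };

  struct ste {                      /* segment table entry ... one per address space */
      struct pte *page_table;       /* may point at the SAME shared page table */
      unsigned    seg_protect : 1;  /* segment protect: per-address-space r/o bit */
  };

  int main(void)
  {
      struct pte shared[16] = {0};  /* one physical page table, shared */
      struct ste a = { shared, 1 }; /* this address space sees the segment r/o */
      struct ste b = { shared, 0 }; /* this address space has r/w access */
      printf("same page table: %d, a r/o: %u, b r/o: %u\n",
             a.page_table == b.page_table,
             (unsigned)a.seg_protect, (unsigned)b.seg_protect);
      return 0;
  }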

semi-related recent discussion about virtual memory protection mechanisms versus (360) storage key protection mechanisms:
https://www.garlic.com/~lynn/2011.html#44 CKD DASD
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#79 Speed of Old Hard Disks - adcons

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Sat, 22 Jan 2011 16:57:42 -0500
Ahem A Rivet's Shot <steveo@eircom.net> writes:
Network File Systems? First done by Novell with Netware.

I think DEC FAL was a bit earlier (i.e. earlier than PCs).


there was also datahub ... done by san jose ... but a lot of the implementation was being done under subcontract with group in provo ... there was somebody commuting from san jose to provo nearly every week. when san jose decided not to follow thru with datahub ... they let the group in provo retain all the work they had done.

https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006l.html#39 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2007f.html#17 Is computer history taught now?
https://www.garlic.com/~lynn/2007j.html#49 How difficult would it be for a SYSPROG ?
https://www.garlic.com/~lynn/2007n.html#21 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
https://www.garlic.com/~lynn/2007n.html#86 The Unexpected Fact about the First Computer Programmer
https://www.garlic.com/~lynn/2007p.html#35 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#53 folklore indeed
https://www.garlic.com/~lynn/2008e.html#8 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2008p.html#36 Making tea
https://www.garlic.com/~lynn/2008r.html#68 New machine code
https://www.garlic.com/~lynn/2009e.html#58 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2010.html#15 Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Sat, 22 Jan 2011 21:24:01 -0500
"SG1" <lostitall@the.races> writes:
As for an operating system (OS) for the new computers, since Microsoft had never written an operating system before, Gates had suggested that IBM investigate an OS called CP/M (Control Program for Microcomputers), written by Gary Kildall of Digital Research. Kildall had his Ph.D. in computers and had written the most successful operating system of the time, selling over 600,000 copies of CP/M; his OS set the standard at that time.

re:
https://www.garlic.com/~lynn/2011b.html#3 Rare Apple I computer sells for $216,000 in London

https://en.wikipedia.org/wiki/Gary_Kildall

gone 404 ... but lives on the wayback machine
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html

kildall used cp67/cms at the naval postgraduate school (the wiki mentions he fulfilled his draft obligation by teaching at NPG)

melinda's virtual machine history going back to science center, cp40 & cp67.
https://www.leeandmelindavarian.com/Melinda#VMHist

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2011 10:01:04 -0500
hancock4 writes:
IBM obviously "lost money" in giving out free software; I don't think they charged even for distribution tapes or documentation even in the 1970s after unbundling; if it was a legacy free item, you got the package for free. (And IIRC, some unbundled fee products were still quite cheap, esp as compared to today's software prices.)

Anyway, the free software was IBM's 'loss leader' to build the utility value of its computers. IBM unbundled this partly in response to anti- trust pressures, says Watson in his autobio.


bundling back then was somewhat like flat-rate internet & cellphone packages ... it immensely simplified things for the customer. machines were leased prior to unbundling ... processors had a "meter" (like home utilities); customers had a standard 1st shift monthly charge and paid additional for use above straight 1st shift. in that sense, much of the bundling was similar for programs ... packaged deals for the "leased" equipment. not long after unbundling, much of the install base was converted from lease to sale (with some unflattering comments that the motivation was an outgoing executive getting a big bonus because of the revenue spike, even though it reduced future ongoing revenue). misc past posts mentioning the 23jun69 unbundling announcement
https://www.garlic.com/~lynn/submain.html#unbundle

besides software, it wasn't unusual for there to be a "team" of SEs ("systems engineers") assigned to bigger customers; nearly always onsite at the customer to provide whatever assistance was needed for using the computer. With unbundling, SE services also became "charged for". One of the big issues was that lots of SE education had been a sort of journeyman/trainee role as part of these SE "teams" onsite at customer installations. With unbundling, nobody was able to figure out what to do with "trainee" SEs (since if they were doing anything at a customer site, it had to be a billable line item). The "HONE" systems were several internal (virtual machine) CP67 datacenters, initially providing "Hands-On" online access for branch office SEs ... so they could practice their operating system skills. misc. past posts mentioning HONE:
https://www.garlic.com/~lynn/subtopic.html#hone

one of the big (lease/metering) issues for offering a 7x24 online timesharing service was that the meter would run whenever the system was active/available ... even if it wasn't executing ... so programming tricks were needed to keep the meter from running when there was no activity. early on, off-shift use was extremely sporadic ... and it wasn't likely to increase unless the system was available, on-demand, 7x24 ... but the recoverable charges for the light, sporadic use weren't sufficient to cover the hardware billing charges (having the meter run only when there was actual use went a long way toward being able to deploy a 7x24 online offering). some past posts about early 7x24 online commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare

the company was able to make the case with the gov. that "kernel" software was still free (necessary for the hardware to operate).

the company then had the failed Future System effort ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

During Future System (which was going to completely replace 360/370 and be radically different), 370 efforts that were considered possibly competitive were killed off. Then with the demise of FS, there was a mad rush to get items back into the 370 hardware & software pipelines. The lack of 370 products is also considered a reason that clone processors were able to get a market foothold. With new 370 items getting back into the product pipelines and the clone processor competition, a decision was made to transition to charging for kernel software.

I had been doing 360/370 stuff all during the FS period (and making uncomplimentary comments about the FS activity). some old email (one of my hobbies was providing packaged production enhanced operating systems for internal datacenters):
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

With the demise of FS and mad rush to get out 370 products, various pieces that I had been doing were selected to ship to customers. Some of the items selected were to be packaged as (kernel add-on) "Resource Manager" product. My "Resource Manager" then got selected to be guinea pig for starting to charge for kernel software. Misc. past posts mentioning scheduling and resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare

for a few years there was a "base" (free) operating system (kernel) offering with optional "charged-for" kernel software (that was growing in size) ... until the cut-over was made to charge for all kernel software (and kernel product packaging collapsed back to a single offering). About this time there was also the transition to "object code only" ... even tho software had started being charged-for with the 23jun69 announcement, source was still available (some products shipped with full source, with source change/maintenance as a standard feature). "object code only" eliminated source availability (as a standard option).

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2011 11:52:17 -0500
re:
https://www.garlic.com/~lynn/2011b.html#5 Mainframe upgrade done with wire cutters?

a side-effect of the gov. & unbundling ... was that there were rules that prices charged had to show a profit. basically there were original development costs (upfront) plus production and support costs. some amount of the pre-unbundling development costs were grandfathered ... so development costs were primarily an issue for new products after unbundling.

some parts of the company weren't used to such accounting and had rather lavish development processes. another characteristic was that some number of products were assumed to be price sensitive ... so a "forecast" was done at "low", "middle", and "high" price. basically


(price) * (related forecast) >
(development costs) + (production & support costs)

the assumption was that the "low" price would have a larger forecast than the "high" price ... and the "low" price might even result in larger total revenue/profit, as in the sketch below.
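
a toy calculation (all numbers hypothetical) showing how a "low" price with its larger forecast can yield more total profit than a "high" price:

  #include <stdio.h>

  int main(void)
  {
      double dev_costs = 2.0e6, support_per_unit = 500.0;
      struct { const char *name; double price, forecast; } cases[] = {
          { "low",    3000.0, 2000 },   /* price sensitive: more buyers */
          { "middle", 5000.0,  900 },
          { "high",   8000.0,  400 },
      };
      for (int i = 0; i < 3; i++) {
          double revenue = cases[i].price * cases[i].forecast;
          double costs   = dev_costs + support_per_unit * cases[i].forecast;
          printf("%-6s: revenue %10.0f, costs %9.0f, profit %10.0f\n",
                 cases[i].name, revenue, costs, revenue - costs);
      }
      return 0;                         /* "low" shows the largest profit here */
  }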

however, some "new" products (done under much more lavish circumstances) found that there was no price that would result in enough customers to show a profit. however, there was some latitude in interpreting the rules.

in the mid-70s, there was a new "networking" product for the favorite son operating system for which no price-forecast resulted in a profit. however, there was also a vm370 "networking" product that the company was refusing to release (kill vm370 strategy) ... which had been developed with comparably "no resources" ... and had a large forecast (if it were allowed to be announced) at numerous price points. The way forward for the favorite son operating system's networking product was a "combined" product announcement ... combining the development costs and the forecasts for both products ... which provided a way for the "mainstream" product to be announced (the combined product financials showed a profit).

in the early 80s, the interpretation of the rules seemed to get even more relaxed. It was then sufficient to have totally different products in the same development organization ... the total revenue from all the products covered the total costs of developing & supporting all the products (in one case a 2-3 person product which had the same revenue as a couple hundred person product, allowing revenue from a "non-strategic" product to underwrite a product considered "strategic").

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2011 13:41:37 -0500
"Phxbrd" <lesliesethhammond@yahoo.com> writes:
While on straight commission of 17%, I frequently made more on programming and card-cutting when I sold for Friden/Singer from 1963 to 1973. Most of my sales were to 3rd party lessors, who paid me another 2%. I started out as a programmer trainee, then became salesman, analyst, designer, programmer and installer but soon had a full time systems secretary and a programmer under me. I loved nothing better than whipping IBM. In fact, it was an interview with IBM that sent me to Friden where I was hired on one interview.

re:
https://www.garlic.com/~lynn/2011b.html#5 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2011b.html#6 Mainframe upgrade done with wire cutters?

after having spent a number of years redoing pieces of os/360 and rewriting a lot of cp/67 as an undergraduate ... I went to a standard job fair interview and was given the corporate programmer aptitude test ... which I apparently didn't pass ... but the science center hired me anyway. misc. past posts mentioning the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the person doing the job interviews (at the univ) was from the san jose plant site ... and he couldn't understand why I was being given a job offer ... it wasn't even an entry position ... it started me straight out of school at a higher level ... of course, besides doing all the stuff at the univ ... I had also been called in to help setup some of the early BCS operation ... recent reference:
https://www.garlic.com/~lynn/2010q.html#59 Boeing Plant 2 ... End of an Era

somebody made a comment about the above post ... that when they went to work for lockheed ... they were told that the "real" winner of the C5A competition was the 747 (aka having lost, boeing turned it into a much more successful/profitable commercial airplane).

also as mentioned in the Boeing ref above ... for quite some time, I thought that the renton datacenter was the largest around ... but recently one of the Boyd biographies (I had sponsored Boyd's briefings at IBM) mentioned that in 1970, he had done a stint running "spook base" ... which was a $2.5B "windfall" for IBM (approx. ten times the value of the mainframes in the renton datacenter) ... inflation adjustment is about a factor of seven ... or approx. $17.5B in today's dollars.

Marketing people on the Boeing account claim it was a salesman on that account that motivated the corporation's change from straight commission to "quotas". Supposedly the day the 360 was 1st announced, Boeing walked into the salesman's office with a large 360 order (and knew significantly more about what the 360 was than the salesman did). The commission was much larger than the CEO's compensation ... resulting in the company creating the sales "quota" plan for the following year. The following year, Boeing walked in with another large order ... resulting in the salesman exceeding 100% of quota before the end of Jan. The salesman then left and formed a large computer services company.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2011 17:33:00 -0500
greymausg writes:
If I remember correctly, one could buy an early 386 with either working FPU or not, exactly the same chip.

re:
https://www.garlic.com/~lynn/2011b.html#5 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2011b.html#6 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2011b.html#7 Mainframe upgrade done with wire cutters?

there was a period when the 386dx had a line cut & was packaged with a 16bit external bus (instead of 32bit) ... and sold cheaper as the 386sx ... sort of like the 8088 version of the 8086 (note the 386 had no on-chip fpu; the same-chip-with-fpu-disabled story was the later 486sx).

that summer, overseas builders had built up a big inventory of 286 machines for xmas seasonal sales. the 386sx just blew the 286 machines out of the water, and they were sold at deep discount that fall (builders could effectively drop a 386sx into a 286 motherboard design and get higher performance)
https://en.wikipedia.org/wiki/Intel_80386

the 370 equivalent was that the incremental cost of producing a 370/158 dropped so low (comparable to some old figures regarding incremental manufacturing costs for producing high volume autos) ... that it became possible to sell 3031 for the same or less than 370/158.

the 370/158 had microcode engine that was shared between the 370 cpu function and the integrated channel function. for 303x, they created a "channel director" ... which was a 158 microcode engine w/o the cpu microcode and just the integrated channel function ... coupled with a 3031 processor ... which was 158 microcode engine w/o the integrated channel microcode and just the cpu microcode (in theory, a "single processor" 3031 actually had two 158 engines rather than just one).

old posts with some 1990 price discussion about 286, 386, 486 (after the '88 xmas season when the bottom dropped out of the 286 market)
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Sun, 23 Jan 2011 18:14:42 -0500
Seebs <usenet-nospam@seebs.net> writes:
Basically, Microsoft single-handedly invented the botnet and the email virus. Actually, I'm not quite sure that's fair. Technically, the GOOD TIMES jokers *invented* the email virus, as an abstract concept, but Microsoft was by far the first company to actually implement the necessary infrastructure.

there was xmas exec on bitnet in nov87 ... vmshare archive
http://vm.marist.edu/~vmshare/browse.cgi?fn=CHRISTMA&ft=PROB
old risk digest
http://catless.ncl.ac.uk/Risks/5.81.html#subj1

almost exactly a year before morris worm (nov88)
https://en.wikipedia.org/wiki/Morris_worm

the xmas exec is basically social engineering ... distributing a compromised executable and getting people to load & execute.

this is slightly different from conventions for automatic execution, which grew up with various office applications that evolved on local, private, safe, closed business networks. this infrastructure was then transferred to the wild anarchy of the internet w/o the necessary safety countermeasures (aka just reading an email could result in automatic execution)

bitnet (along with EARN in europe) was a higher education network (significantly underwritten by IBM and using technology similar to that used for the corporate internal network) ... past posts mentioning bitnet &/or earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

some old email by person charged with setting up EARN:
https://www.garlic.com/~lynn/2001h.html#email840320

the internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86. misc. past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

I was blamed for online computer conferencing on the internal network in the late 70s and early 80s. The folklore is that when the executive committee was told about online computer conferencing (and the internal network), 5 of 6 wanted to fire me.

Later, somewhat as a result, a researcher was paid to study how I communicated ... got copies of all my incoming & outgoing email, logs of all my instant messages, and sat in the back of my office for nine months taking notes on face-to-face and phone conversations (sometimes going with me to meetings). This also turned into a stanford phd thesis and material for some number of papers and books. misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Sun, 23 Jan 2011 23:06:59 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
there was xmas exec on bitnet in nov87 ... vmshare archive
http://vm.marist.edu/~vmshare/browse.cgi?fn=CHRISTMA&ft=PROB
old risk digest
http://catless.ncl.ac.uk/Risks/5.81.html#subj1

the xmas exec is basically social engineering ... distributing a compromised executable and getting people to load & execute.


re:
https://www.garlic.com/~lynn/2011b.html#9 Rare Apple I computer sells for $216,000 in London

bitnet announcement on vmshare
http://vm.marist.edu/~vmshare/browse.cgi?fn=BITNET&ft=MEMO

tymshare made its vm370/cms online computer conferencing available to the SHARE user group organization in aug76
http://vm.marist.edu/~vmshare/

recent post about the internal network ...
https://www.garlic.com/~lynn/2011.html#4

including old email about plans to convert the internal network to sna/vtam
https://www.garlic.com/~lynn/2011.html#email870306

also references old email about the executive committee being told that PROFS was an SNA application (among other things) used to justify converting the internal network to sna/vtam:
https://www.garlic.com/~lynn/2006x.html#email870302
in this old post
https://www.garlic.com/~lynn/2006x.html#7

and somewhat similar discussion here ... where somebody forwarded me a lengthy log of email discussing how sna/vtam could be the nsfnet backbone
https://www.garlic.com/~lynn/2006w.html#email870109

in this old post
https://www.garlic.com/~lynn/2006w.html#21

some of the same people involved in the above referenced email exchanges (about sna/vtam for nsfnet backbone) ... were later involved in the transfer of cluster scale-up ... mentioned in this old post about jan92 meeting in ellison's conference room:
https://www.garlic.com/~lynn/95.html#13

also referenced in this other email
https://www.garlic.com/~lynn/lhwemail.html#medusa

--
virtualization experience starting Jan1968, online at home since Mar1970

Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster

From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Jan, 2011
Subject: Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
Blog: LinkedIn
lots of RFID work has been done for EPC ... basically a next generation barcode for inventory applications. Various efforts have used the technology to encode static data ... like what is on the payment card magstripe ... for use in contactless environments. Another is iso14443, which has gotten a lot of play in transit turnstile applications. In the late 90s, some from the transit industry asked if an x9.59 financial transaction could be done within the power & elapsed time turnstile requirements, using a chip less expensive than transit chips but much more secure than "secure" financial contact chips.

the card security code was a countermeasure to the white card account number guessing attack (generating magstripe info from formula) ... basically a hash of the rest of the magstripe information encoded with a bin/bank secret.
https://en.wikipedia.org/wiki/Card_security_code
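
a toy sketch of the idea (illustration only ... the real computation is DES-based with issuer-held card verification keys over PAN, expiration date, and service code; the key, account data, and mixing function here are all made up):

  #include <stdio.h>
  #include <stdint.h>

  /* toy keyed checksum standing in for the issuer's DES-based computation */
  static unsigned toy_mac(const char *data, uint32_t key)
  {
      uint32_t h = key;
      for (; *data; data++)
          h = h * 31 + (uint8_t)*data;   /* mix each magstripe character */
      return h % 1000;                   /* the code is a short decimal value */
  }

  int main(void)
  {
      uint32_t issuer_secret = 0x5ec7e7;   /* hypothetical bin/bank secret */
      const char *track = "4000001234567899" "2506" "101"; /* PAN+expiry+service */
      /* a counterfeiter can guess account numbers, but w/o the issuer
         secret can't produce a code that verifies ... the "white card"
         countermeasure */
      printf("card security code: %03u\n", toy_mac(track, issuer_secret));
      return 0;
  }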

however, since at least the early 90s, skimming attacks have copied valid magstripes (including the security code) for creating counterfeits. the x9.59 standard was from the x9a10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments

nacha did a pilot close to x9.59. old ref from the wayback machine:
https://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html
earlier response to the RFI
https://www.garlic.com/~lynn/nacharfi.htm

there were a number of "safe" payment products being pushed at the start of the century that got high marks from large merchants ... until they were told that there would effectively be a surcharge on top of the highest fee they were already paying (severe cognitive dissonance after decades of indoctrination that much of the interchange fee was proportional to fraud).

US institutions have had 40% (for some large institutions 60%) of their bottom line coming from payments, as compared to less than 10% for European institutions. As a result US institutions have been a lot more resistant to any kind of disruptive change (merchants had expected a significant drop in interchange fees for the "safe" products).

in addition to the card security wiki ... there is the magstripe wiki (which has had a major rewrite recently ... the old version referenced the Los Gatos lab where I had several offices and labs):
https://en.wikipedia.org/wiki/Magnetic_stripe_card

The magstripe wiki references the IBMer that invented the magstripe ... the old version also mentioned that the magstripe standard was run out of the los gatos lab for the first decade or so ... the los gatos lab also did some of the early ATM (cash) machines

In addition to the x9.59 standard, I did a chip design ... that could be used contactless; in combination with x9.59, live transactions could be eavesdropped on and crooks still couldn't use the information for fraudulent transactions (also a countermeasure to crooks using info from data breaches for fraudulent transactions). I gave a presentation at the Intel Developer Forum in the trusted computing track ... mentioning the chip was at least as secure as the TPM chip and 1/10th (or less) the cost.
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing hardware RESERVE

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Testing hardware RESERVE
Newsgroups: bit.listserv.ibm-main
Date: 25 Jan 2011 08:19:25 -0800
PaulGBoulder@AIM.COM (Paul Gilmartin) writes:
Long ago, circa MVS 3.8 without GRS, in our little lab we got sporadic deadlocks when one job allocated SYSLIB on VOL001, SYSLMOD on VOL002, and another allocated SYSLIB on VOL002, SYSLMOD on VOL001.

long ago and far away ... discussion of the ACP RPQ for the 3830 ... allowing fine granularity locks (more like VAX/VMS) in lieu of reserve/release
https://www.garlic.com/~lynn/2008i.html#email800325
in this post
https://www.garlic.com/~lynn/2008i.html#39 American Airlines

above references System/R which was the original relational/SQL done in bldg. 28
https://www.garlic.com/~lynn/submain.html#systemr

Another approach is a CKD channel program with compare&swap semantics, developed for HONE in the late 70s (the US operation was possibly the largest single-system-image, loosely-coupled operation in the world at the time) ... it was more efficient than RESERVE/RELEASE (but not as efficient as the ACP RPQ) since it involved an additional rotation. At one time there were extensive discussions with the JES2 multi-spool group about doing something similar. Misc. past posts mentioning the internal HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

Later I needed an inverse of RESERVE for large cluster operation ... in a recovery operation needing to remove a specific processor from the configuration ... I wanted an FCS switch operation that allowed access to everybody except the processor that had been removed from the configuration (there is a failure mode where a processor stops, appears to have failed, and then later resumes ... potentially just before doing some I/O operation with global impact ... not realizing that it has been removed from the configuration).

One of the problems was that the FCS effort was quite distracted with the complex work to layer FICON on top of it (somewhat in the manner that ALL current day CKD is done by simulation on top of underlying FBA).

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Tue, 25 Jan 2011 13:52:35 -0500
Morten Reistad <first@last.name> writes:
It has been ages since I couldn't read a document. It still happens all the time that one formats strangely. But that happens on other MS installations as well, where the setup for the details is different from what the original author intended.

re:
https://www.garlic.com/~lynn/2011b.html#4 Rare Apple I computer sells for $216,000 in London

regarding Melinda's pages with some mainframe historic documents moving ... there was some comment that princeton was removing her pages because of possible hate crimes issues ... over her comments about MVS.
https://www.leeandmelindavarian.com/Melinda#VMHist

she had a multi-file postscript version that was many tens of megabytes (with lots of pictures), which I converted to PDF (4mbytes) and then did an azw/kindle conversion. frequently a kindle conversion becomes a smaller file ... but with all the images, the kindle version is twice as large as the pdf (9mbytes vs 4mbytes).

others of the PDF files, with figures that are line-drawings made of characters, didn't convert nearly as well for kindle (even though they converted to smaller files) ... with the characters in the drawings being "flowed".

--
virtualization experience starting Jan1968, online at home since Mar1970

Long-running jobs, PDS, and DISP=SHR

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Long-running jobs, PDS, and DISP=SHR
Newsgroups: bit.listserv.ibm-main
Date: 25 Jan 2011 11:02:30 -0800
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
No. None of his scenarios involved concurrent updates. The danger comes from scenarios other than the ones he presented. Of course, the ABEND S213 will prevent corruption of the directory in that case.

for a long time, CKD disks had a corrupted data vulnerability involving loss of power ... particularly nasty when updating VTOC &/or PDS directories (the vulnerability started to disappear with CKD being emulated on FBA ... since FBA tended to have countermeasures for the problem, in part because of predictable block sizes).

The issue was that if power was lost in the middle of a write operation ... the channel/controller could continue to transfer data ... basically filling in with zeros (data was no longer coming from processor memory). The disk write could complete ... with the propagated zeros and correct error correcting (ECC) information written (computed over the propagated zeros).

On restoration of power ... there would be no disk error reading the record ... just that the record would be all zeros from the point power was lost.
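
a toy model of why the corruption is silent (a checksum stands in for the real ECC; all details hypothetical):

  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  /* stand-in for the ECC computed by the controller as the record is written */
  static uint32_t ecc(const uint8_t *rec, size_t len)
  {
      uint32_t sum = 0;
      while (len--)
          sum = sum * 131 + *rec++;
      return sum;
  }

  int main(void)
  {
      uint8_t record[32];
      memset(record, 'D', sizeof record);   /* data from processor memory */
      size_t transferred = 12;              /* power lost after 12 bytes */
      /* controller keeps transferring, propagating zeros to end of record */
      memset(record + transferred, 0, sizeof record - transferred);
      uint32_t stored = ecc(record, sizeof record);  /* ECC over padded data */

      /* after power restoration: the record reads back with no disk error */
      printf("read checks ok: %s\n",
             ecc(record, sizeof record) == stored ? "yes" : "no");
      /* ... yet everything past byte 12 is silently zeros */
      return 0;
  }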

--
virtualization experience starting Jan1968, online at home since Mar1970

History of copy on write

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of copy on write
Newsgroups: comp.arch
Date: Tue, 25 Jan 2011 16:53:19 -0500
timcaffrey@aol.com (Tim McCaffrey) writes:
Funny, other mainframe OSs figured out those problems in the same time period.

Let's face it, Unix was supposed to be Multics-lite. It was supposed to do multitasking/multiuser on a budget (a very small budget). Security (both from the HS sense and the sandbox sense) wasn't a high priority, in other words they didn't try to make it absolute since their attitude (if they knew it or not) was that they controlled the environment and they "just had to be careful". Example: I remember reading that there was NO recovery code in the disk drivers, since they just felt if the disk was going bad, buy a new one.

BTW, you can program in assembly in a secure OS, and even on secure machines. Suppose there was an OS that took full advantage of segments on the 386: You can use pointers, but you can't use buffer overflow, execute data or otherwise touch things you're not supposed to. Still, assembly language is valid in such an OS (yes, I wish there was one, it would be much more reliable).


some number of the CTSS people went to 5th flr of 545 tech sq for Multics and others went to the science center on the 4th flr and did things like virtual machines (cp40, cp67, vm370). misc. refs
https://www.garlic.com/~lynn/subtopic.html#545tech

multics was done w/pli ... and had none of the buffer overflow problems of unix. old post
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation

with references to
http://www.acsac.org/2002/papers/classic-multics.pdf

and
http://csrc.nist.gov/publications/history/karg74.pdf

and a reference to virtual machine work done on the 4th flr:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

part of the unix issue was that the string/array conventions in C make it almost as hard to *NOT* shoot yourself in the foot ... as it is in many other environments to actually shoot yourself in the foot (conventions in many other environments mean you have to work really hard to get a buffer overflow ... even in some assembler environments with particular coding conventions).
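
a minimal C illustration of the convention problem (hypothetical buffer sizes):

  #include <stdio.h>
  #include <string.h>

  /* the C convention: a "string" is just a pointer with no bound carried
     along, so the obvious call overruns silently */
  void risky(const char *input)
  {
      char buf[16];
      strcpy(buf, input);             /* overflows buf for input >= 16 bytes */
      printf("%s\n", buf);
  }

  /* the care it takes NOT to shoot yourself in the foot: pass the bound
     explicitly and terminate by hand */
  void careful(const char *input)
  {
      char buf[16];
      strncpy(buf, input, sizeof buf - 1);
      buf[sizeof buf - 1] = '\0';     /* strncpy doesn't always terminate */
      printf("%s\n", buf);
  }

  int main(void)
  {
      careful("this input is much longer than sixteen bytes");
      /* risky() with the same input would silently corrupt the stack */
      return 0;
  }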

--
virtualization experience starting Jan1968, online at home since Mar1970

WikiLeaks' Wall Street Bombshell

From: lynn@garlic.com (Lynn Wheeler)
Date: 25 Jan, 2011
Subject: WikiLeaks' Wall Street Bombshell
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2010p.html#43 WikiLeaks' Wall Street Bombshell
https://www.garlic.com/~lynn/2010p.html#48 WikiLeaks' Wall Street Bombshell
https://www.garlic.com/~lynn/2010q.html#17 WikiLeaks' Wall Street Bombshell
https://www.garlic.com/~lynn/2010q.html#23 WikiLeaks' Wall Street Bombshell
https://www.garlic.com/~lynn/2010q.html#27 WikiLeaks' Wall Street Bombshell

BofA update: Bank of America braces itself for fallout from WikiLeaks disclosures report says
http://www.computerworld.com/s/article/9203180/Bank_of_America_braces_itself_for_fallout_from_WikiLeaks_disclosures_report_says

and more general wikileak item

Ralph Nader: Wikileaks and the First Amendment
http://www.counterpunch.org/nader12212010.html

all the articles seem to draw the analogy with the pentagon papers ... somebody obtains the information (potentially anonymously) and provides it for publication. The Nader article raises the question of why all the obfuscation and misdirection has been going on regarding wikileaks' role in any publication.

wikileak related ... not particularly financial ... but gov. ... more along the lines of the old pentagon papers:

A Clear Danger to Free Speech
http://www.nytimes.com/2011/01/04/opinion/04stone.html?_r=1

latest, article also mentions BofA:

Assange vows to drop 'insurance' files on Rupert Murdoch
http://www.theregister.co.uk/2011/01/12/wikileaks_insurance_files/

related to attacks on wikileaks founder

US Wikileaks investigators can't link Assange to Manning
http://www.theregister.co.uk/2011/01/25/assange_cant_be_tied_to_manning_says_report

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers, aus.electronics, aus.computers
Date: Wed, 26 Jan 2011 13:27:15 -0500
Roland Hutchinson <my.spamtrap@verizon.net> writes:
It's Microsoft's latest success at making darkness the new standard.

As governments started to wise up and require that documents be preserved in documented file formats, Microsoft decided that rather than embracing the already established international standard for office documents, they would subvert the ISO standards process by packing various committees with new representatives from various nations whom they had coerced or bought off and ram through a "fast-track" approval of their own newly-devised proprietary format as a standard.

Needless to say, the proposed standard was of Byzantine complexity, unnecessarily long (6000 pages), and yet not long enough, since it was full of "documentation" that amounted to things like "Do this the way Excel 97 does" without further elaboration, all of which made it impossible for anyone else to implement.

Even the bought-and-paid-for committees couldn't quite bring themselves to approve it as it stood over a sea of (inadequately heard) objections from third parties, so some revisions were required. Result: a "standard" that nobody supports, not even Microsoft, but with a "transitional" version that (what a coincidence!) matches Microsoft's current Office formats.

The whole episode has left the ISO itself in a very bad light indeed, with calls for revising the procedures that let this happen.

But don't take my word for it. Here you go:

https://en.wikipedia.org/wiki/Standardization_of_Office_Open_XML


a slightly similar but different tale: ISO required that work on networking standards conform to the OSI model. I was involved in taking the HSP (high-speed protocol) work to x3s3.3 (the US ISO-chartered committee for networking standards). It was rejected because:

1) it went directly from transport/level four to LAN/MAC ... bypassing network/level three ... violating OSI model

2) it supporting "internetworking" ... a non-existent layer in the OSI model (approx. between transport/networking)

3) it went directly to LAN/MAC interface ... a non-existent interface in the OSI model (sitting approx. in the middle of layer 3 networking).

one of the other differences between ISO and IETF that has been periodically highlighted is that IETF (aka internet standards) requires that interoperable (different) implementations be demonstrated before progressing in the standards process. ISO can pass standards for things that have never been implemented (and potentially are impossible to implement).

misc. past posts mentioning HSP, ISO, OSI, etc
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

note that the fed. gov. in the late 80s was mandating that the internet be eliminated and replaced with OSI (aka GOSIP).

--
virtualization experience starting Jan1968, online at home since Mar1970

Melinda Varian's history page move

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: Melinda Varian's history page move
Blog: IBM Historic Computing
Melinda Varian's history page move
https://www.leeandmelindavarian.com/Melinda#VMHist

Some of the info about the 70s & 80s was somewhat garbled, since it was only based on information available externally.

For instance, in the wake of the FS failure and the mad rush to get products back into the 370 product pipeline, the favorite son operating system did manage to convince the corporation to kill vm370, shutdown the burlington development group, and move all the people to POK to support MVS/XA development (or otherwise MVS/XA wouldn't meet its "ship" schedule). There was a "VMTOOL" done as part of MVS/XA development ... but it was never intended to be shipped to customers (eventually coming out as a "migration aid").

Endicott did manage to save the vm370 product mission, but it had to reconstitute a development group from scratch.

The corporation was planning on delaying the announcement of the burlington closing until the last possible minute ... to minimize the potential that members might find alternatives to moving to POK. When the information was leaked early ... there was a serious hunt to find the source. During those last weeks the halls in burlington were full of very furtive people ... nobody wanted to be seen talking to anybody else (or otherwise be suspected as the source of the leak). There was a minor joke that the head of POK was a significant contributor to (DEC) VAX/VMS ... because of the number of burlington people that managed to leak away to DEC.

Another minor note: there was some claim that the 3rd or 4th internal CMSBACK release was the first (before morphing into workstation datasave, ADSM and the current TSM). The original internal CMSBACK release was actually done in the late 70s ... and then went through a number of internal releases
https://www.garlic.com/~lynn/lhwemail.html#cmsback

note: VMTOOL (the internal-only virtual machine system for supporting mvs/xa development) is different from VMTOOLS (the internal VM-related online conferencing using TOOLSRUN).

VMTOOL was supposed to be an internal-only mvs/xa development tool ... it was never supposed to be released to customers. originally the virtual machine product was to be killed off ... until endicott saved the product mission and had to recreate a product group from scratch. The effects of that can sort of be inferred from Melinda's history comments about the extremely poor product quality during endicott's startup phase.

Melinda's history also mentions that the VM/XA Migration Aid was based on the VM Tool built for use by the MVS/XA developers. Even though the vm tool was never intended to be released to customers ... a case was eventually made that MVS customers needed it as an aid for migrating from MVS to MVS/XA ... small extract:
Although the button IBM was handing out said that the Migration Aid was more than just a migration aid, it was not very much more. Release 1 was intended to be used to run MVS/SP in production while running MVS/XA in test. All virtual machines were uniprocessors, for example. VM/SP guests were also supported, and RSCS and PVM would run, but CMS was supported only for maintaining the system. However, the group in Poughkeepsie was working hard to produce a real VM/XA system.

... snip ...

also see the paragraph preceding the above extract (this info was just what was available externally).

trivia ... who leaked to burlington that the corporation had decided to kill off vm370, shutdown the burlington group, and move all the people to POK as part of supporting MVS/XA (POK had made the case that they wouldn't be able to meet the MVS/XA ship schedule w/o those resources)?

I would contend that one of the reasons that CMS/XA development was being done in YKT was exactly because there had been no plan for a vm/cms product (and POK wasn't staffed or organized for such a product).

... some core of the burlington resources moved to POK was preserved for the vmtool work ... but none of the other skills/resources were kept for what they had been doing in burlington (product support, cms, etc). That was why the vmtool release, as the (mvs to mvs/xa) migration aid, had nothing else done for xa-mode ... just what had been done for the internal MVS/XA development.

Also see the SIE discussion, which points to old email discussing the difference between the (original) 3081 SIE and the 3090 SIE. The 3081 SIE had never been intended for high-performance production work with high frequency invocation. In the 3081, there was limited microcode space ... so part of invoking the SIE instruction was "paged" microcode ... paged by the service processor (uc.5) from 3310/FBA
https://www.garlic.com/~lynn/2011.html#62 SIE - CompArch

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
from long ago and far away

Date: 26 June 1984, 02:15:05 EDT
From: <*guess who*>

For Your Information... This is the SHARE requirement on CMS/XA that will probably be voted on in August.
<*guess who*>
------------
TITLE: CMS/XA Support for Compatibility and Memory Expansion

AUTHOR NAME: Charles Whitman
INSTALLATION: The Adesse Corporation

STATEMENT:
The limited address space of CMS restricts use of CMS in a growing variety of ways. Many CMS applications requiring both native CMS services and OS simulation have begun to run out of address space. In addition, as mixed MVS-VM installations convert to MVS/XA, the disparity in compilers and program products becomes more of a problem than it has been.

As a first step, future releases of VM/SP should introduce interfaces to allow programmers to avoid techniques that are incompatible with XA (eg., stealing PSWs and issuing SIOs). In addition, CMS/XA should provide the following:
• A compatibility mode for execution of existing applications in less than 16M virtual machines.
• Access to data and execution of code above the 16M boundary.
• A new CMS Application Program Interface to provide access to native CMS services for code executing above the 16M boundary.
• Extended CMS macros to avoid requiring CMS/XA programs to use MVS/XA macros. This is particularly true for the CMS file system where the OS access methods are an unacceptable alternative.
• Object-code compatibility with MVS/XA compilers such as CMS presently enjoys with the MVS/SP compilers.

JUSTIFY:
Installations are encountering the 16M limit for the following reasons:
• The complexity and size of applications software has increased dramatically in recent years.
• The number and size of DCSSs that must be available simultaneously has forced some installations and software vendors to create overlay structures of DCSSs.
• The size of matrices processed is being limited by available address space.
A new native CMS Application Program Interface is required for the following reasons:
• CMS commands and system code must run above the 16M boundary using something comparable to the present CMS interfaces.
• The simplicity and function of the existing CMS interface should be preserved to ensure continued productivity of CMS programmers.
• CMS performance depends largely on native CMS services which are generally more efficient than OS simulation, which are in turn more efficient than corresponding MVS services.
• Expansion of existing CMS applications requires access to CMS services from programs executing above the 16M boundary.
• Existing CMS installations can not afford to train their CMS programmers in OS services.
• Absence of a commitment to extend the native CMS interfaces in a CMS/XA effectively constrains installations planning large CMS applications, limiting the growth of their VM systems.
An MVS/XA Application Program Interface is required for the following reasons:
• Compatibility with MVS/XA compilers and software products permits development of MVS/XA applications under CMS, and permits execution of MVS/XA applications under CMS.
• Absence of a commitment to provide an MVS/XA interface in a CMS/XA effectively constrains development of large applications designed to run under both MVS/XA and CMS.

The consequences of not addressing this requirement include the following:
• A paradoxical 16M limit on CMS virtual storage for users executing programs on real machines with several times as much real storage.
• Constraint on software development due to inadequate VM/CMS support on high-end processors.
• Lost hardware sales to installations who turn to alternative system architectures that provide interactive environments with no 16M limit.
• Lost hardware sales to dedicated VM/CMS installations who turn to non-IBM plug-compatible hardware for high-end processors.


... snip ... top of post, old email index

and from a week earlier

Date: 06/18/84 11:51:10
From: wheeler

re: pam changes; fyi ... updates described in ciajoint to split the dchsect into two blocks are being picked up by the cms/xa group, pretty much as is ... and will support aligning the data block on 4k EDF page boundaries. Still negotiating over the issues of picking up full pam support.


... snip ... top of post, old email index

In the above, somebody had taken a bunch of internal stuff and included it as part of a proposed joint study with the CIA.

PAM ... full page-mapped filesystem support ... which I had originally done in cp67 and migrated to vm370. DCSS (primarily changes allowing CMS code to be included in shared segments) for vm370 release 3 ... was a very small subset of PAM ... reworked to work with DMKSNT. Recent references to DMKSNT:
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2011.html#74 shared code, was Speed of Old Hard Disks - adcons

lots of past mentions of page mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1

One of the issues regarding running out of 16mbyte virtual address space (mentioned in the cms/xa requirements) is slightly analogous to MVS running out of application 16mbyte virtual address space: because of the extensive pointer-passing API, the MVS kernel image occupied 8mbytes of every address space ... and the "common segment" was threatening to take another 6mbytes at large installations (concurrently running applications needed their own unique locations in the common segment for each subsystem).

The original PAM/shared-segment implementation allowed shared segments to appear at different locations. In the vm370 release 3 DCSS subset using DMKSNT ... each application "shared segment" was given a unique location in a hypothetical installation-"global" address space (to keep users from having application address conflicts). Some installations then went to multiple hypothetical "global" address spaces ... multiple copies of a shared system application fixed at different virtual addresses (hoping that a user could find some combination of the shared systems they wanted that wouldn't have virtual address space conflicts).

In the "dynamic" flavor, available contiguous virtual address would be chosen dynamically at load time for each cms user (possibility that same shared system appearing simultaneously at different virtual address for different cms users). Misc. past posts mentioning issues with dynamic virtual location
https://www.garlic.com/~lynn/submain.html#adcon
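
as an aside ... a minimal sketch (modern C, purely illustrative, NOT CMS/370 code) of the underlying adcon problem: an absolute address constant baked into a shared segment only resolves correctly if every address space maps the segment at the same virtual address, while a self-relative offset stays valid wherever the segment appears.

#include <stdio.h>
#include <stdint.h>

/* hypothetical illustration: a shared segment holding an absolute
   address constant ("adcon") is only valid when mapped at the address
   it was built for; a self-relative offset remains valid wherever the
   segment happens to be mapped */

struct shared_seg {
    intptr_t table_adcon;   /* absolute address, fixed at build time */
    intptr_t table_offset;  /* offset from segment origin, relocatable */
    int      table[4];
};

static int *via_adcon(struct shared_seg *seg) {
    return (int *)seg->table_adcon;                  /* breaks if segment moves */
}

static int *via_offset(struct shared_seg *seg) {
    return (int *)((char *)seg + seg->table_offset); /* valid at any address */
}

int main(void) {
    struct shared_seg seg;
    seg.table_adcon  = (intptr_t)seg.table;          /* only correct at this address */
    seg.table_offset = (char *)seg.table - (char *)&seg;
    printf("via adcon:  %p\n", (void *)via_adcon(&seg));
    printf("via offset: %p\n", (void *)via_offset(&seg));
    return 0;
}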

these recent posts mentioning the MVS common segment/CSA issue
https://www.garlic.com/~lynn/2011.html#45 CKD DASD
https://www.garlic.com/~lynn/2011.html#79 Speed of Old Hard Disks - adcons

--
virtualization experience starting Jan1968, online at home since Mar1970

New-home sales in 2010 fall to lowest in 47 years

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: New-home sales in 2010 fall to lowest in 47 years
Blog: Facebook
New-home sales in 2010 fall to lowest in 47 years
http://news.yahoo.com/s/ap/20110126/ap_on_bi_go_ec_fi/us_new_home_sales

from above:
Sales for all of 2010 totaled 321,000, a drop of 14.4 percent from the 375,000 homes sold in 2009, the Commerce Department said Wednesday. It was the fifth consecutive year that sales have declined after hitting record highs for the five previous years when the housing market was booming.

... snip ...

and the news spin on business tv: DEC2010 saw a "surge" in new-home sales (the fine print: compared to nov2010; nov2010 was the lowest ever, dec2010 the 2nd lowest, but still a "surge")

it occurred to me that it might be a version of pump&dump ... helping with the hype to get the dow to close above 12k. Cramer did an interview a couple yrs ago claiming that it was widespread practice to take a position in the market and then generate a flurry of slanted news (& while illegal, nobody was worried because the SEC was too dumb to figure it out). recent refs to above:
https://www.garlic.com/~lynn/2010h.html#41 Profiling of fraudsters
https://www.garlic.com/~lynn/2010p.html#43 WikiLeaks' Wall Street Bombshell

there is a slightly different view ... in the madoff hearings there was testimony by a person who had tried for a decade to get the SEC to do something about Madoff. In the weeks around the hearings, the person wouldn't appear in public and/or give interviews. When pressed, a spokesperson made some reference to believing that the only reason the SEC didn't do anything about Madoff for a decade was that it was heavily under the influence of criminal organizations (capable of extreme violence, who might object to his efforts to bring down Madoff). a few recent refs to above:
https://www.garlic.com/~lynn/2010q.html#21 Ernst & Young called to account -- should Audit firms be investigated for their role in the crisis?
https://www.garlic.com/~lynn/2010q.html#40 Ernst & Young sued for fraud over Lehman
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?

--
virtualization experience starting Jan1968, online at home since Mar1970

What do you think about fraud prevention in the governments?

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: What do you think about fraud prevention in the governments?
Blog: Financial Crime Risk, Fraud and Security
Crisis Panel Report Pins Blame on Wall Street, Washington
http://www.bloomberg.com/news/2011-01-26/crisis-panel-report-pins-blame-on-wall-street-washington.html

the dissenting positions seem to ignore many of the items from this thread

including the part that the rating agencies played in allowing unregulated loan originators to unload everything they wrote at triple-A ratings (so they no longer had to care about loan quality and/or borrowers' qualifications)

... some archive from the thread:
https://www.garlic.com/~lynn/2011.html#46 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#48 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#49 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?
https://www.garlic.com/~lynn/2011.html#53 What do you think about fraud prevention in the governments?

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1

... for a little more fun, later in the day on 26jun84:

Date: 06/26/84 19:31:17
To: HONE branch office id (se on large customer account)
Cc: <*guess who*>
From: wheeler

hear anything more from pok/kingston??? I noticed <*guess who*> forwarded the vmshare cms/xa article (that i sent out) to pok/kingston. maybe that will hurry things along.


... snip ... top of post, old email index, HONE email

I had an ongoing arrangement with Tymshare where they sent me regular tape dumps of all the VMSHARE (and later also PCSHARE) online computer conferencing files ... which I then made available internally.

vmshare archive:
http://vm.marist.edu/~vmshare/

the above references Melinda's (princeton) home page (which has yet to disappear).

misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM S/360 Green Card high quality scan

From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: IBM S/360 Green Card high quality scan
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010q.html#47 IBM S/360 Green Card high quality scan

another (also 26jun84) item, regarding an IPCS replacement in REXX

Date: 06/26/84 15:11:37
To: endicott organization
From: wheeler

re: dumprx;

dumprx is over three years old. ... unfortunately I've made several presentations on it at share & other groups over two years ago. I'm just getting messages that a non-IBM company is coming out with something that is supposedly very similar to dumprx & its implementation.

DUMPRX started as a project to demonstrate that system programming could be done in higher level language ... and specifically that some number of IBM system programming projects currently done in assembler would be appropriately done in a higher level procedural language that was interpreted ... i.e. demonstrate REXX feasibility.

Well, three years has come and gone ... it looks like somebody else is releasing it ... even tho IBM ain't interested.

One of the primary issues of DUMPRX was that, given the concept of how to implement it ... several other things fell out. As a result my total effort to date has been under six weeks ... which should be comparable for anybody attempting to duplicate the effort. Anybody starting from the DUMPRX concept and putting in some real effort (more than six weeks) could turn out a much more functional product.


... snip ... top of post, old email index

at the time, DUMPRX was extensively used by internal datacenters and the majority of VM PSRs (internal product support people). misc. past posts mentioning dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

when rexx was still very young (not yet a product and still called "rex"), I wanted to demonstrate that it wasn't just another pretty scripting language. The objective was to implement a replacement for IPCS (written in assembler) in REXX, working half-time over a period of three months (and it would have ten times the function and be ten times faster).

The initial objective was achieved in a couple of weeks ... so the rest of the time was spent doing "automated" debugging scripts that would perform most of the typical debugging operations a real person might do ... as well as a scan of most of storage looking for anomalies and specific failure signatures (being ten times faster than the IPCS assembler was important here). A DUMPRX session could be saved & restarted ... so a human could resume an automated session where extensive analysis had already been performed.
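
a minimal sketch of the scan-for-failure-signatures idea (in C rather than REXX, and with made-up signatures ... DUMPRX itself was a much more elaborate REXX implementation):

#include <stdio.h>
#include <string.h>

/* walk a dump buffer looking for known "eyecatcher" strings and
   report their offsets; the signatures here are invented for
   illustration */

static const char *signatures[] = { "ABEND", "FREEMAIN", "OVERLAY" };

static void scan_dump(const char *dump, size_t len) {
    for (size_t s = 0; s < sizeof signatures / sizeof *signatures; s++) {
        size_t slen = strlen(signatures[s]);
        for (size_t i = 0; i + slen <= len; i++)
            if (memcmp(dump + i, signatures[s], slen) == 0)
                printf("signature %-8s at offset 0x%zx\n", signatures[s], i);
    }
}

int main(void) {
    const char dump[] = "....ABEND....OVERLAY....";
    scan_dump(dump, sizeof dump - 1);
    return 0;
}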

--
virtualization experience starting Jan1968, online at home since Mar1970

Melinda Varian's history page move

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Jan 2011
Subject: Melinda Varian's history page move
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move

Note that CMS/XA was started well after the start of MVS/XA (and the shutdown of the burlington development group) ... and wasn't available until 370/xa had been at customers for several years (another indication that originally there were no plans for a VM/XA product)

Date: 10/26/82 13:45:07
To: wheeler

Lynn,
Just logged on this ID and found your messages and notes about development 'processes'. I have to agree with all of the comments I read. It is MOST frustrating to have to work in this environment - as I'm sure you appreciate.
Last week, *XXXXX* gave his 'state of the nation' address to the assembled VM group here in POK. I was dismayed at some of the things he said; for example - some VM work is being sub-contracted (sorry, 'vendored') to *iiiiiiiiiii*. Of course, the work they do is "simple", not the advanced type which we do "here", but it costs him 1 million dollars per year for this 'simple' work (and it is done by 8 programmers). Similarly, CMS/XA is being sub-contracted to Yorktown, but again, this is "simple work". He didn't mention the fact that a 6 or 8 man group is contemplating generating something like 60,000 lines of code by mid-1985, which works out at at least 500 LOC per person per month - 5 or 10 times what is optimistically expected here in POK CP development.
If only we weren't all bogged down in this mire of 'process' which management has decided is vital, and if, as you propose, a decent amount of time was allocated to education (the right kind, of course), I'm sure we wouldn't be in the position we are now.
It almost seems we are getting back to the stage immediately preceding the Tandem memos again, but what can be done??????? I wish I knew!


... snip ... top of post, old email index

I was blamed for online computer conferencing on the internal network in the late 70s and early 80s ... part of which came to be referred to as Tandem Memos. There was an article on the phenomenon in nov81 Datamation. The folklore is that when the executive committee was informed of online computer conferencing (and the internal network), 5of6 wanted to fire me.

Date: 3 October 1984, 14:45:53 EDT
From: <*guess who*>
To: large distribution

The Turbo CMS (CMS/XA) IPFS is now available for your reading pleasure. If you want a copy (if you are really going to read it -- it is 160 pages long) send me your mailing address & I will send you a copy. Those of you here in Yorktown can stop by 89-N15 & get a copy.

Please send all comments to me. Thanks, <*guess who*>


... snip ... top of post, old email index

other recent cms/xa references:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1

--
virtualization experience starting Jan1968, online at home since Mar1970

Rare Apple I computer sells for $216,000 in London

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Rare Apple I computer sells for $216,000 in London
Newsgroups: alt.folklore.computers
Date: Thu, 27 Jan 2011 22:13:33 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
When we wanted to "download" something we had to mail in a request for a tape and wait for it to come back the same way :-)

[older folks can substitute "card deck" for tape.]


i periodically comment about being asked to help do a HONE clone in Paris in the early 70s ... as part of EMEA hdqtrs moving from NY to Paris ... and the difficulty of finding a way to read email back in the states. misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Jan, 2011
Subject: The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)
Blog: Financial Cryptography
The Zippo Lighter theory of the financial crisis (or, who do we want to blame?)
http://financialcryptography.com/mt/archives/001305.html

slightly shorter version of my earlier comments ... in

What caused the financial crisis. (Laying bare the end of banking.)
http://financialcryptography.com/mt/archives/001268.html

turning loans/mortgages into toxic CDOs created a huge new set of transactions for wallstreet, which was able to take possibly 15-20% in fees and commissions

reference to possibly $27T in triple-A rated toxic CDO transactions done during the bubble (possibly $5.4T for wallstreet?)
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

there are reports that the industry tripled in size (as a percent of GDP) during the bubble. also, the NY comptroller reported that wallstreet bonuses spiked more than 400% during the bubble.

mortgages had been packaged as CDOs during the S&L crisis (to obfuscate the underlying value); but w/o triple-A ratings they had very little market.

this time, unregulated loan originators found that they could pay for triple-A ratings (even when both the originators and the rating agencies knew they weren't worth triple-A ... from the congressional rating agency hearings, fall2008) ... which exploded the market for the toxic CDOs ... and eliminated any reason for the loan originators to care about loan quality or borrowers' qualifications. Since they could immediately unload everything at triple-A ... they just turned into loan/mortgage mills ... their revenue limited only by how many & how fast they could turn over the loans.

no-documentation, no-down, 1% interest-only payment ARMs found an exploding market among speculators; with real-estate inflation running 20-30% ... that meant possibly 2000% ROI (flipping before the rates adjusted ... further churning transactions and boosting inflation). These mortgages became the equivalent of the '20s "Brokers' Loans" ... allowing the real-estate market to be turned into the equivalent of the '20s stock market.
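
the 2000% figure follows from the numbers above ... worked out here with an arbitrary example price (a sketch, not data from any actual transaction):

#include <stdio.h>

/* no money down, 1% interest-only carry, 20-30% annual real-estate
   inflation (the figures from the text); the property price is an
   arbitrary example */

int main(void) {
    double price   = 500000.0;
    double carry   = price * 0.01;   /* 1% interest-only payment */
    double lo_gain = price * 0.20;   /* 20% appreciation */
    double hi_gain = price * 0.30;   /* 30% appreciation */
    printf("ROI on carry: %.0f%% to %.0f%%\n",
           100.0 * lo_gain / carry, 100.0 * hi_gain / carry);
    return 0;
}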

Individual compensation on the loan-originator and wall street sides (of packaging loans as triple-A rated toxic CDOs) was so great that it overrode any possible concern about the effects on the institution, the economy, and/or the country (with the ability to buy triple-A ratings sitting in the middle).

The triple-A rating made the toxic CDOs appear acceptable to a large number of institutions that wouldn't otherwise deal in such instruments. At the end of 2008, the four too-big-to-fail regulated, "safe" institutions were carrying $5.2T of the toxic CDOs "off-balance" (courtesy of the repeal of Glass-Steagall and their "unregulated", risky investment banking arms). ref
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

The GSEs lost a major portion of mortgage market share as mortgages were packaged up as triple-A rated toxic CDOs and sold in these other markets.

Earlier in 2008, a number of toxic CDO sales ... totaling several tens of billions ... had gone for 22cents on the dollar. If the too-big-to-fail institutions had had to bring their off-balance toxic CDOs back onto the balance sheet, they would have been declared insolvent and forced to be liquidated. TARP funds had supposedly been intended to buy up these toxic assets, but they obviously didn't know the magnitude of the problem (the amount appropriated wouldn't have made a dent in it).

It took more than a year of legal efforts to get the Federal Reserve to disclose what it had been doing; buried in the information is a reference to it buying up these assets at 98cents on the dollar.

now, this is an archaic, long-winded post from 1999 discussing several of the problems ... including the fact that in 1989 CITI was nearly taken down by its ARM mortgage portfolio (it got out of that business and required a private bailout to stay in business)
https://www.garlic.com/~lynn/aepay3.htm#riskm

roll forward, and CITI is holding the largest share of the $5.2T in triple-A toxic CDOs (effectively mostly an ARM mortgage portfolio) and requires another bailout to stay in business. The repeal of Glass-Steagall didn't directly cause the problem; it just allowed several of these too-big-to-fail institutions to enormously help fuel the (triple-A rated toxic CDO) loan/mortgage mill and side-step the regulations designed to keep them out of risky behavior.

related to the ZIPPO theme ... regulations had kept the hotspots of greed and corruption separate and damped down ... then the period of extremely lax regulation and de-regulation allowed the greed and corruption hotspots to merge and create a financial firestorm.

one of the suggestions has been to RICO wallstreet
https://en.wikipedia.org/wiki/Racketeer_Influenced_and_Corrupt_Organizations_Act

for three times the $27T involved in the triple-A rated toxic CDOs ($81T).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe upgrade done with wire cutters?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe upgrade done with wire cutters?
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jan 2011 16:04:11 -0500
re:
https://www.garlic.com/~lynn/2011b.html#8 Mainframe upgrade done with wire cutters?
https://www.garlic.com/~lynn/2011b.html#15 History of copy on write

... faulty memory and not carefully rereading the referenced wiki article ... it clearly says that the 387 wasn't ready at the time of the 386(dx), and so early motherboards had slots for the 287.

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Jan 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move

the theoretical cutoff for this discussion group is 1985 ... a lot of the vm/xa and cms/xa stuff is 85 or later.

the CMSTRUCT MEMO from vmshare touches on some of the issues (from 28Feb82):
http://vm.marist.edu/~vmshare/browse.cgi?fn=CMSTRUCT&ft=MEMO

the referenced TSS CSECT & PSECT come into play regarding my posts about shared segments being able to dynamically appear at multiple different locations simultaneously
https://www.garlic.com/~lynn/submain.html#adcon

The pascal and semi-privileged address space items are somewhat related to the rewrite I did of the cp spool system in pascal, running it in a virtual address space. recent posts:
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010k.html#35 Was VM ever used as an exokernel?

in the following, note the emphasis on *intelligent workstations*. one of the issues in the arpanet/internet passing the internal network in size was the ability of workstations to be network nodes ... while the corporate communication group was restricting PCs and workstations to terminal emulation. some past posts
https://www.garlic.com/~lynn/subnetwork.html#emulation

Date: 20 April 1983, 17:18:11 EST
From: <*guess who*>

About a dozen SEs met this past Monday and Tuesday to discuss CMS and CMS/XA requirements. Here is my summary of some of the important issues that came out of that meeting.

Summary of input from the CMS Specialists Focus Session

One big incompatible change is much better than many small changes. Substantial improvements in CMS are desired more than compatibility with what we have today. A competitive product that has incompatibilities can be sold more easily than a completely compatible product that is not competitive.

File sharing is the number one requirement. It is more important than SNA support. It is more important than XA support. Any "new" or "improved" CMS is not new or improved if it lacks file sharing. A new file system cannot be sold if it does not have file sharing. (At least, file sharing must be announced as "on the way" when the new file system is announced. Skilled marketing could hold the customers off for a few months.) Customers would accept a system that is up to * slower if it has file sharing. The level of integrity provided by EDF today is considered adequate.

Having only one CMS is very important. This single CMS (at least the user and application programming interfaces) must run across the product line from intelligent workstations to 308x machines. Function may be subsetted on the workstation CMS, but the interface should be consistent.

A well defined interface is very important. It is essential if one is to provide single CMS user and programming interfaces across a range of machines. The interface described in the CMS Restructure proposal (opaque, machine like, well defined) was received with enthusiasm, and was described as being well worth the conversion effort.

Intelligent workstations are going to be as important as XA in the latter half of this decade, if not more so. Great growth in workstation usage is expected. No one seems really sure how they relate to and interface with the mainframes.

Multitasking and window systems are seen as important functions if CMS is to be a competitive system. They are especially important when workstations are considered. Flexibility is very important in a window system, as there will be many different workstations out there. OS ATTACH is important for applications running in service virtual machines. It is much less important for applications running in the CMS user's machine.

DOS simulation is little used. Most users compile DOS programs by submitting them to service machines running VSE as guest operating systems. No complaints if DOS simulation is done away with. Except that VSAM is widely used. OS VSAM is preferred, but it does not support FBA devices.

<*guess who*>


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jan 2011 22:08:36 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
Colossal Cave Adventure, AKA Crowther & Woods Adventure, originally was a 350 point game written in F40 FORTRAN for the PDP-10. There is a port written in PL/I of that version of the game. ISTR that someone here (perhaps Lynn) mentioned it before in <a.f.c.>... I have a copy of the PL/I source.

So I am wondering... *who* wrote this port of Adventure??? Does anyone here know the story of the PL/I version's origin???


internally, i would make the fortran source available to anyone who showed they had completed the game. one of the people in stl who had completed the game (and got a copy of the fortran source) did a pl/i port (or at least one of them).

The person that i believe did that port ... left a couple yrs later ... after having done an educational program for his home atari. when nobody at ibm was interested, he offered it directly to atari. atari then contacted ibm ... who unleashed the lawyers. he departed ibm within 24 hrs.

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2011 10:24:45 -0500
Peter Dassow <z80eu@arcor.de> writes:
That's really *no* secret. You can download several (I guess more than 10 variants) versions of Colossal Cave Adventure even with source code in Fortran or in C from
http://www.rickadams.org/adventure/e_downloads.html


re:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I

this was the internal network, 1978 ... i was getting a copy for vm370/cms from TYMSHARE. TYMSHARE appeared to have gotten a copy from the Stanford SAIL pdp10 ... for their own pdp10, and then ported it to vm370/cms and made the executable available on their commercial online vm370/cms timesharing service (there is folklore that when TYMSHARE executives 1st heard that games were on their system, they insisted all games be removed ... until they were informed that games were accounting for 1/3rd of their online revenue).

TYMSHARE had also made their online computer conferencing available to SHARE as VMSHARE in aug76 ... VMSHARE archive here:
http://vm.marist.edu/~vmshare/

I was setting things up to get regular copies of the VMSHARE files to make them available internally ... so was periodically dropping by TYMSHARE offices for that reason ... and other reasons.

I finally got a copy and made an executable version available on the internal network ... the internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86. misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

TYMSHARE wiki reference:
https://en.wikipedia.org/wiki/Tymshare

random TYMSHARE trivia ... GNOSIS was a 370-based operating system developed by TYMSHARE ... as part of the M/D purchase in '84, i was brought in to do an audit of GNOSIS
https://en.wikipedia.org/wiki/GNOSIS

as part of its spinoff to keylogic (for keykos)
https://en.wikipedia.org/wiki/KeyKOS

... i still have gnosis hardcopy someplace in boxes.

Doug was also working for TYMSHARE at the time ... and there was some concern about what would happen in the M/D purchase ... so I set up some interviews trying to get him hired
https://en.wikipedia.org/wiki/Douglas_Engelbart

for other drift ... misc. past posts mentioning adams website:
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006y.html#19 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2007g.html#0 10 worst PCs
https://www.garlic.com/~lynn/2010q.html#70 VMSHARE Archives

note ... tcp/ip is the technology basis for the modern internet, the NSFNET backbone was the operational basis for the modern internet, and CIX was the business basis for the modern internet. in the 80s I was working with NSF and some of the expected locations on what was to become the NSFNET backbone ... an old email reference:

Date: 11/14/85 09:33:21
From: wheeler

re: cp internals class;

I'm not sure about 3 days solid ... and/or how useful it might be all at once ... but I might be able to do a couple of half days here and there when I'm in washington for other reasons. I'm there (Alexandria) next tues, weds, & some of thursday.

I expect ... when the NSF joint study for the super computer center network gets signed ... i'll be down there more.

BTW, I'm looking for a IBM 370 processor in the wash. DC area running VM where I might be able to get a couple of userids and install some hardware to connect to a satellite earth station & drive PVM & RSCS networking. It would connect into the internal IBM pilot ... and possibly also the NSF supercomputer pilot.


... snip ... top of post, old email index, NSFNET email

however, various internal politics prevented me from being able to bid on the NSFNET backbone RFP. The director of NSF attempted to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) ... but that just aggravated the internal politics (there were comments that what we already had running was at least five years ahead of all NSFNET bid submissions to build something new). misc. other old NSFNET email
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

recent post about various internal operations claiming SNA/VTAM for NSFNET:
https://www.garlic.com/~lynn/2011b.html#10 Rare Apple I computer sells for $216,000 in London

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2011 10:37:44 -0500
re:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I

for other archaic nsfnet topic drift

Date: 09/30/85 17:27:27
From: wheeler

re: channel attach box; fyi;

I'm meeting with NSF on weds. to negotiate joint project which will install HSDT as backbone network to tie together all super-computer centers ... and probably some number of others as well. Discussions are pretty well along ... they have signed confidentiality agreements and such.

For one piece of it, I would like to be able to use the cambridge channel attach box.

I'll be up in Milford a week from weds. to present the details of the NSF project to ACIS management.


... snip ... top of post, old email index, NSFNET email

I had an internal HSDT (high-speed data transport) project ... it had terrestrial links as well as TDMA earth stations with a transponder on SBS4. I was involved in doing various kinds of things ... including numerous projects using NSC's HYPERchannel ... misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and a little more HSDT drift
https://www.garlic.com/~lynn/2011.html#92 HELP: I need a Printer Terminal

Date: 11/16/85 14:54:06
From: wheeler

re: fireberd; fyi, i've got the beginnings of a turbo pascal program that I hope will eventually be able to ship fireberd data to the host. Right now I have two programs, one that supports two ASYNCH ports and acts as an ascii terminal display (ultramux) with one asynch port and as the fireberd printer using COM2. I also have another Turbo program that interfaces to xxxxx's high level language interface for PC327x and will logon to VM and perform some other simple tasks. I hope to merge the two so that I can log both fireberd error data and ultramux output on a vm host.


... snip ... top of post, old email index

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 29 Jan 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move
https://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1

and for quite a bit CMS/XA topic drift:

misc. past posts mentioning HSDT (high-speed data transport) project and/or (NSC) HYPERchannel
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Date: 28 November 1983, 11:19:35 EST
To: wheeler

I am from IBM-xxxxxx, presently on assignment in Bob O'Hara's department working on CMS/XA. I spent my last 2 years in Endicott in the CMS development group. I was reading your paper "Historical Views of VM Performance" and in the bibliography found a reference to "VM/370 Hyperchannel Support I" which you have not published yet. Prior to my coming to the US, I was in the "field" working for the French Atomic Energy Commission (roughly equivalent to the SLAC) where I was in charge of VM and VTAM. They had a large heterogeneous network involving IBM and CDC computers and at that time (that was in 79-80) they got several NSC hyperchannels for which they developed support in JES2. I worked in putting JES2 under CMS and that's the way we've got VM into the network.

Would it be possible to get a copy of your paper ; is there any plan to include your work in the product ? I am interested in any information related to VM - NSC. Thank you.


... snip ... top of post, old email index

some of this came up in yesterday's thread about ADVENTURE on vm370/cms in the late 70s:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#32 Colossal Cave Adventure in PL/I

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jan 2011 09:50:26 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
I'm meeting with NSF on weds. to negotiate joint project which will install HSDT as backbone network to tie together all super-computer centers ... and probably some number of others as well. Discussions are pretty well along ... they have signed confidentiality agreements and such.

re:
https://www.garlic.com/~lynn/2011b.html#email850930

I was never actually allowed to sign such a contract and/or install HSDT ... later the network to tie together all the super-computer centers became the NSFNET backbone (which then evolved into the modern internet) ... after the original NSFNET backbone RFP. misc. past email related to "NSFNET"
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

The NSFNET Backbone RFP went out specifying "T1" links (in part because HSDT was already running a T1 backbone). However, the winning response (for $11.2M) actually only installed 440kbit links. Then, sort of to meet the letter of the RFP ... they installed T1 trunks with telco multiplexors running multiple 440kbit links (three 440kbit links fit within a 1.544mbit T1 trunk). I sarcastically commented that possibly they could claim a "T5" network ... since there was the possibility that some of the T1 trunks were in turn multiplexed over T5 trunks at some point. For various & sundry reasons, there is old folklore that the actual resources that went into the NSFNET backbone were three to four times the RFP $11.2M.

old email reference to director of NSF sending corporate letter (copying CEO) regarding HSDT (hoping to help with the internal politics, but it apparently just made it worse):
https://www.garlic.com/~lynn/2006s.html#email860417

old email regarding internal politics attempting to position sna/vtam for nsfnet backbone
https://www.garlic.com/~lynn/2006w.html#email870109

old email reference to NSFNET RFP finally being awarded
https://www.garlic.com/~lynn/2000e.html#email880104

... later, possibly believing that they could blunt my sarcastic remarks, I was asked to be the redteam for the NSFNET backbone T1->T3 upgrade RFP response (the blueteam was something like 2-3 dozen people from a half dozen or so labs around the world). At the final review, I presented 1st; then, a few minutes into the blueteam response, the executive running the review pounded on the table and said he would lie down in front of a garbage truck before he let any but the blueteam response go forward (apparently even executives could understand my response was vastly superior). I got up and left ... there were even a few others that walked out with me.

past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#32 Colossal Cave Adventure in PL/I

above also posted to linkedin ietf discussion:
http://www.linkedin.com/groups/Today-IETF-Turns-25-Slashdot-83669.S.40470495?qid=d0b1ba86-6e72-48b0-9694-d382555fcdda
http://www.linkedin.com/blink?msgID=I78973238_20&redirect=leo%3A%2F%2Fplh%2Fhttp%253A*3*3lnkd%252Ein*3tqm3Pj%2F4ium&trk=plh

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jan 2011 16:36:14 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Maybe http://home.roadrunner.com/~pflass/PLI/code/advent.pli

Versions and Ports of Adventure known to exist
http://www.io.com/~ged/www/advelist.html

references:
VMCM0350 -- PL/I port of WOOD0350 for VM/CMS
http://risc.ua.edu/pub/games/colossal/colossal.zip
http://risc.ua.edu/pub/games/cms/colossal.vmarc


but the above server no longer exists

however, there is the wayback machine:
https://web.archive.org/web/*/http://risc.ua.edu/pub/games/cms/colossal.vmarc

but at the moment it is saying try again later.

note the adventure wiki (and several other references that appear to repeat what is in the wiki) ... it says adventure became available on ibm mainframes on vm/cms in late 1978, utilizing the pl/i version. tymshare had just taken the fortran version from the pdp10 to vm/cms (and apparently had gotten the pdp10 version from stanford, just a couple miles away).

while waiting for copy from Tymshare ... spring '78 email requesting copy
https://www.garlic.com/~lynn/2008s.html#email780321

response a couple weeks later from UK (corporate site near/at univ. location)
https://www.garlic.com/~lynn/2006y.html#email780405

and my response shortly later same day
https://www.garlic.com/~lynn/2006y.html#email780405b

adventure related email a few days later
https://www.garlic.com/~lynn/2007m.html#email780414

and another email a month later mentioning pli version
https://www.garlic.com/~lynn/2007m.html#email780517

past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#32 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#34 Colossal Cave Adventure in PL/I

past posts in longer thread from year ago.
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#64 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#67 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#68 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#75 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#77 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#82 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#4 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#9 Adventure - Or Colossal Cave Adventure

and a similar thread from four yrs ago:
https://www.garlic.com/~lynn/2007o.html#8 Original Colossal Cave Adventure
https://www.garlic.com/~lynn/2007o.html#11 Original Colossal Cave Adventure
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?

note while melinda's princeton site is moving ... previous ref:
https://www.garlic.com/~lynn/2011b.html#4 Rare Apple I computer sells for $216,000 in London

it still will live on at the wayback machine (i.e. the archived www.princeton.edu pages) ... note that a previous incarnation of Melinda's webpages at "pucc.princeton.edu" used to have a copy of the zork source for vm/cms ... while it has been gone for some time, it also still lives on at the wayback machine:
https://web.archive.org/web/20010124044900/http://pucc.princeton.edu/~melinda/

--
virtualization experience starting Jan1968, online at home since Mar1970

Internal Fraud and Dollar Losses

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 30 Jan, 2011
Subject: Internal Fraud and Dollar Losses
Blog: Financial Crime Risk, Fraud and Security
Internal Fraud and Dollar Losses
http://www.bankinfosecurity.com/articles.php?art_id=3296&rf=2011-01-27-eb

from above:
Research Suggests Banks Don't Catch Most Internal Fraud Schemes. A new Aite research report proves internal fraud is more damaging than many financial-services companies realize.

... snip ...

When we were doing what is now frequently called "electronic commerce", I tried to include a requirement that anybody who had any contact in any way with the webserver needed an FBI background check. Obviously that didn't fly.

The issue is that information from previous transactions (skimming, data breaches, eavesdropping, etc) can be used by crooks to perform fraudulent financial transactions. I have tried using a number of metaphors regarding the problem: dual-use vulnerability (the same information needed by crooks is also required in dozens of business processes at millions of locations around the world); security proportional to risk (the value of the information to the merchant is the profit from the transaction, possibly a few dollars; the value of the information to a crook is the account balance or credit limit ... hundreds or thousands of dollars; so crooks can afford to spend possibly 100 times as much attacking as the merchant can afford to spend defending).
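
the security-proportional-to-risk arithmetic, using the rough magnitudes above (illustrative numbers only):

#include <stdio.h>

/* the defender can only justify spending in proportion to the profit
   on a transaction, while the attacker can justify spending in
   proportion to the account's credit limit; the numbers are the rough
   magnitudes from the text, not real data */

int main(void) {
    double merchant_profit = 5.0;    /* value of txn info to the merchant */
    double credit_limit    = 500.0;  /* value of same info to a crook */
    printf("attacker/defender spending ratio: %.0fx\n",
           credit_limit / merchant_profit);
    return 0;
}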

In the mid-90s, possibly because of the work on electronic commerce, I was invited to participate in the x9a10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments (i.e. ALL: internet, point-of-sale, unattended, high value, low value, debit, credit, gift card, ACH, wireless, contact, contactless, etc). One of the features of the resulting financial transaction standard was a slight tweak to the current paradigm that eliminated information from previous transactions as a risk (it didn't do anything about skimming, eavesdropping, data breaches, etc; it just eliminated crooks being able to use the information for fraudulent transactions).
https://www.garlic.com/~lynn/x959.html#x959
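
a generic sketch of how authenticating every transaction removes the fraud value of harvested account data (this is NOT the actual x9.59 mechanism; the key and the toy MAC here are purely hypothetical):

#include <stdio.h>
#include <stdint.h>

/* if every transaction carries an authentication code that only the
   account holder's key can produce, account data harvested from old
   transactions (skimming, breaches, eavesdropping) is no longer
   sufficient to originate a new valid transaction; the "MAC" below is
   a toy mixing function, for illustration only */

static uint64_t toy_mac(uint64_t key, const char *msg) {
    uint64_t h = key ^ 0x9e3779b97f4a7c15ull;
    for (; *msg; msg++)
        h = (h ^ (uint8_t)*msg) * 0x100000001b3ull;
    return h;
}

int main(void) {
    uint64_t holder_key = 0x1234abcd5678ef00ull;   /* known only to the payer */
    const char txn[] = "acct=4055...;amount=59.95;merchant=872";

    uint64_t auth = toy_mac(holder_key, txn);      /* sent along with the txn */

    /* a crook replaying the account data with a guessed key fails */
    uint64_t forged = toy_mac(0xdeadbeefull, txn);
    printf("issuer check: %s\n", forged == auth ? "ACCEPT" : "REJECT");
    return 0;
}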

now the biggest use of SSL in the world today is this early thing we had done for electronic commerce, hiding financial transaction details. However, the x9a10 work eliminated the need to hide such details ... so it also eliminates the major use of SSL in the world today.

--
virtualization experience starting Jan1968, online at home since Mar1970

1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
Newsgroups: alt.folklore.computers
Date: Mon, 31 Jan 2011 10:42:07 -0500
old post ... with parts of presentation I gave at fall68 SHARE meeting
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

starting with the sysgen for MFT11 (on a 360/67 mostly running as a 360/65), I would take the stage1 output (stage2), create independent jobs for most job steps, and re-arrange the order of lots of things. The objective was to 1) run stage2 in the production jobstream and 2) order files & PDS members for optimal arm seek motion. For a lot of typical univ. workload, it achieved nearly a three times thruput improvement.

a slight improvement from IBM came with release 15/16, which made it possible to specify the VTOC location. The VTOC was the highest-used disk data and had always been on cylinder 0; careful placement of files and PDS members would put the highest-used data next to the VTOC and decreasingly-used data at increasing cylinder distances. Starting with release 15/16, it was possible to place the VTOC in the middle of the disk and arrange high-use data on both sides of the VTOC.
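
a toy model of the seek-distance argument (uniform access over an arbitrary 200-cylinder pack is an assumption for illustration, not measured data):

#include <stdio.h>
#include <stdlib.h>

/* expected arm travel from the VTOC to a uniformly chosen cylinder:
   roughly C/2 with the VTOC at cylinder 0, roughly C/4 with the VTOC
   in the middle of the pack */

#define CYLS 200

static double avg_seek_from(int vtoc) {
    double sum = 0.0;
    for (int cyl = 0; cyl < CYLS; cyl++)
        sum += abs(cyl - vtoc);
    return sum / CYLS;
}

int main(void) {
    printf("avg seek, VTOC at cyl 0:  %.1f cylinders\n", avg_seek_from(0));
    printf("avg seek, VTOC in middle: %.1f cylinders\n", avg_seek_from(CYLS / 2));
    return 0;
}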

I had also done some work rewriting significant portions of cp67 to reduce pathlength ... the presentation also gives some numbers from that pathlength reduction.

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Mon, 31 Jan 2011 15:12:46 -0500
re:
https://www.garlic.com/~lynn/2011b.html#35 Colossal Cave Adventure in PL/I

as per referenced thread here in afc from 2010:
https://www.garlic.com/~lynn/2010d.html#64 Adventure - Or Colossal Cave Adventure

I was sent Advent_CMS.ZIP (2008), which contains a complete set of source and executable files (along with the necessary data).

and as also referenced, in an ibm-main discussion there was a reference to another copy on file 269 at cbttape
http://www.cbttape.org/

other posts in this thread:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#32 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#34 Colossal Cave Adventure in PL/I

--
virtualization experience starting Jan1968, online at home since Mar1970

1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
Newsgroups: alt.folklore.computers
Date: Mon, 31 Jan 2011 19:35:08 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Hey Lynn - I'm just rereading Melinda's history of VM on my Kindle. (I wanted something to play with). I see your name is there several times.

re:
https://www.garlic.com/~lynn/2011b.html#37 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed

kindle version up here:
https://www.leeandmelindavarian.com/Melinda/neuvm.azw

as mentioned recently in the linkedin ibm historic computing group ... after doing the kindle version ... and getting melinda to put it up on her new webpage ... i was rereading it ... and found i had comments on many of the pages ... misc. ibm historic computing post
https://www.garlic.com/~lynn/2011b.html#25 Melinda Varian's history page move

old email looking for the original cms source multi-level update process.
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

it was implemented in an EXEC that iterated multiple times, for the project that added 370 virtual machine simulation to cp/67 (370 had new instructions, and the virtual memory hardware tables were different from the 360/67's) ... aka "cp/67-h". Then the "cp/67-i" updates were changes to cp/67 itself to run on 370 (originally in a cp/67 virtual machine).

i had lots of stuff archived from the early 70s and even some stuff from the univ. in the 60s. these were replicated on multiple different tapes in the (same) almaden tape library. the above exchange was fortunately (shortly) before almaden had an operational problem where random tapes were mounted as "scratch" (managing to wipe out all my redundant copies).

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Tue, 01 Feb 2011 09:59:36 -0500
re:
https://www.garlic.com/~lynn/2011b.html#34 Colossal Cave Adventure in PL/I

for other "super-computer" topic drift ... old post with reference to jan92 meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13

old email mentioning "medusa"
https://www.garlic.com/~lynn/lhwemail.html#medusa

the last email reference in the above
https://www.garlic.com/~lynn/2006x.html#email920129

was possibly only hrs before the effort was transferred and we were told we couldn't work on anything with more than four processors. misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

Then came the 2/17/92 press, with reference to being limited to scientific and technical only (aka eliminating commercial)
https://www.garlic.com/~lynn/2001n.html#6000clusters1

and then, 11May92, a comment that they were caught by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

being told we couldn't work on anything with more than four processors (as well as the earlier incident involving the NSFNET backbone) contributed to our leaving later in '92.

Now, two of the other people mentioned in the jan92 meeting in ellison's conference room showed up not long later at a small client/server startup, responsible for something called the "commerce server". We were brought in to consult because they wanted to do payment transactions on the server (the small client/server startup had also invented this technology called "SSL" that they wanted to use). The result is now frequently called "electronic commerce".

part of the "electronic commerce" effort involved something called the "payment gateway" which handled payment transaction flow between "commerce servers" and the payment networks. misc. past posts mentioning payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Tue, 01 Feb 2011 15:20:37 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
I think it was written in PL/I(F), and has some TSO dependencies.

re:
https://www.garlic.com/~lynn/2011b.html#38 Colossal Cave Adventure in PL/I

the version i got in 2008 (Advent_CMS.ZIP) uses TREAD/TWRITE for terminal input/output.


/* TREAD(prompt_msg, prompt_len, message_area, length, rtn_code) */
DCL TREAD ENTRY (CHAR(133),FIXED BIN(31),CHAR(133),
                 FIXED BIN(31),FIXED BIN(31)) OPTIONS (ASM INTER);

/* TWRITE(message, message_len, return_code) */
DCL TWRITE ENTRY (CHAR(133),FIXED BIN(31),FIXED BIN(31))
                 OPTIONS (ASM INTER);

the ZIP file includes a "wellput.asm" ... which includes the following

WELLPUT  TITLE 'W E L L P U T -- WYLBUR/TSO I/O INTERFACE FROM PLI'     00010000
*********************************************************************** 00180000
* WELLPUT -                                                           * 00190000
* THE PURPOSE OF THIS MODULE IS TO SIMULATE THE I/O ROUTINES TREAD    * 00200000
* AND TWRITE USED BY THE ADVENTURE GAME.                              * 00210000
*                                                                     * 00220000
* CALLING SEQUENCES:                                                  * 00230000
*                                                                     * 00240000
* TREAD (PROMPT_MESSAGE,PROMPT_LENGTH, MESSAGE_AREA,LENGTH,RTN_CODE)  * 00250000
*                                                                     * 00260000
* TWRITE (MESSAGE,MESSAGE LENGTH,RETURN CODE)                         * 00270000
*                                                                     * 00280000
*********************************************************************** 00290000

basically maps TREAD/TWRITE to TGET/TPUT (tso) assembler macros (which are simulated under CMS):
http://publib.boulder.ibm.com/infocenter/zvm/v5r3/topic/com.ibm.zvm.v53.dmsa5/hcsd2b00424.htm

"GETIN: PROC" does "CALL TREAD(INSTR,0,INSTR,INLEN,CCODE)"

I have a different ADVENT.PLI, sent to me in 2007, that has no references to TREAD/TWRITE.

"GETIN: PROC" does "DISPLAY(' ') REPLY(CHRIS)"

... I guess there is always hercules (a 370 emulator that runs on intel platforms), which has a packaged vm/cms release 6 ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 01 Feb, 2011
Subject: Productivity And Bubbles
Blog: IBMers
The fall 2008 congressional hearings into the rating agencies had testimony that the rating agencies were selling triple-A ratings on toxic CDOs (when both the sellers and the rating agencies knew they weren't worth triple-A).

CDOs had been used in the S&L crisis to obfuscate the underlying mortgage values ... but w/o triple-A ratings they found little market. Being able to pay for triple-A ratings gave unregulated loan originators a nearly unlimited supply of funds, since they could immediately sell off every loan they wrote w/o regard to loan quality or borrower qualifications (the only limiting factor became how fast they could write the loans).

During the recent bubble/mess, the estimate is that $27T in triple-A rated toxic CDO transactions were done ... w/o having to resort to federal money.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

Loan originators found a big new market in real estate speculators. Real estate speculators found that no-documentation, no-down, 1% interest-only payment ARMs could return 2000% in real estate markets with 20-30% inflation (with the speculation further fueling inflation and transaction turn-over ... possibly a new mortgage every yr on the same properties). These loans (and the real estate market) became the equivalent of the "Brokers' Loans" that were responsible for the 20s stock market bubble & crash (per the 30s Pecora Hearings into the '29 crash, which also resulted in Glass-Steagall).

Obfuscating the real-estate bubble and crash: the repeal of Glass-Steagall allowed unregulated, risky investment banking arms of safe&secure regulated depository institutions to buy lots of the triple-A rated toxic CDOs and carry them off-balance. The estimate is that at the end of 2008, the four largest too-big-to-fail financial institutions were carrying $5.2T "off balance"
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Earlier in 2008, several tens of billions in toxic CDOs had gone for 22 cents on the dollar. If the $5.2T had been brought back on the books, the institutions would have been declared insolvent and had to be liquidated. On the front-end of the triple-A toxic CDO transactions was the huge speculation in the real-estate market, bubble and bust.

On the backend of the triple-A toxic CDO transactions were all the institutions holding the instruments; all the news about keeping these too-big-to-fail institutions in business (institutions that couldn't have played in such risky business w/o the repeal of Glass-Steagall) was a big distraction from what was going on on the real-estate bubble/bust side.

In the middle were all the new fees and commissions on this new way of packaging loans as triple-A rated toxic CDOs. There was a report that the financial industry tripled in size (as a percent of GDP) during the bubble, and the NY comptroller had a report that aggregate wallstreet bonuses spiked over 400% during the bubble. A possible aggregate of 15% (new fees & commissions) on the $27T would be around $4T.
http://www.businessweek.com/stories/2008-03-19/the-feds-too-easy-on-wall-streetbusinessweek-business-news-stock-market-and-financial-advice

With the gov. bending over backwards to keep the institutions in business ... apparently a little thing like money laundering isn't a big deal ... as when the DEA followed the money trail used to buy drug smuggling planes:
http://www.bloomberg.com/news/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal.html
and
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html?show_comment_id=53702542
and
http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html

The personal compensation on wallstreet was so great that it easily overrode any possible concern about what the $27T in triple-A rated toxic CDO transactions might do to the institutions, the economy and/or the country. The business people were telling the risk dept. to fiddle the inputs until they got the desired outputs (GIGO)
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

How Wall Street Lied to Its Computers
http://bits.blogs.nytimes.com/2008/09/18/how-wall-streets-quants-lied-to-their-computer

a lot of the talk about the computer models being wrong has frequently been obfuscation and misdirection.

Subprime = Triple-A ratings? or 'How to Lie with Statistics'
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/

There was a recent note that the GSEs (fannie/freddie) were responsible for something like $5T of the $27T ... supposedly the GSEs came to triple-A toxic CDOs (mortgage securitization) rather late, and that is why their percent of the market had fallen (in part because GSE standards were so high)

The GSEs aren't w/o their problems ... along the lines of some of the too-big-to-fail institutions, but on a much smaller scale. I think it was CBS that had a news program mentioning that, at one point, freddie had more lobbyists on their rolls than employees (they apparently tried to put everybody in washington, all former congressmen and high-level gov. bureaucrats, on retainer; lots of lobbying money for a GSE ... but it couldn't touch the amount of lobbying money wall street was spending). There was an item from 2008 that Warren Buffett had been the largest Freddie shareholder in 2000/2001 ... but sold all his shares because of the GSE's accounting methods. Freddie had been fined $400m for $10B in inflated statements and the CEO was replaced ... but the CEO was allowed to keep tens (hundreds?) of millions in compensation.

However, lots of companies were doing something similar ... even after SOX ... which put in all sorts of audit procedures and penalties for fraudulent public company financial filings. Possibly because GAO didn't think SEC was doing anything (during the past decade), it started doing reports on the uptick in fraudulent public company financial filings (even after SOX).
http://www.gao.gov/new.items/d061053r.pdf
and
https://www.gao.gov/products/gao-06-1079sp

so did SOX 1) have no effect on fraudulent financial filings, 2) encourage fraudulent financial filings, or 3) was it that, w/o SOX, all financial filings would have been fraudulent (???).

The explanation was that executives boosted their bonuses from the fraudulent filings ... and even when the filings were later corrected, they still didn't have their bonuses corrected. Note also, in the congressional Madoff hearings, one of the persons testifying had tried unsuccessfully for a decade to get SEC to do something about Madoff (there was something about being worried that SEC hadn't done anything because it was heavily under the influence of criminal organizations).

There was also something about GSE CRA/subprime accounting for less than 10% of the $27T. During most of the bubble, the GSEs continued to do pretty much what they had been doing before the bubble. Real CRA/subprime hardly shows up as even a small blip in the bubble. The bubble came from the whole rest of the market ... outside of CRA (and outside the GSEs). You don't get 20-30% inflation in high-end real estate (which is where the majority of the $27T came from) from anything remotely resembling CRA ... effectively by definition, CRA properties have a hard time accounting for even a small percent of the $27T.

this has $10B in CRA done in 2001, with plans for $500B total for the decade
http://findarticles.com/p/articles/mi_m0EIN/is_2001_May_7/ai_74223918/

that comes to about 2% of the $27T. There have been sporadic references to CRA as having contributed to the bubble ... but at 2% ... that is possibly obfuscation and misdirection.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Feb, 2011
Subject: Productivity And Bubbles
Blog: IBMers
re:
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles

Part of the difference was that the original statement was that GSE CRA/subprime accounted for less than 10% of the mortgages (as opposed to 10% of the $27T). The CRA/subprime mortgages were way down at the low-end of the market ... so the aggregate value of those mortgages was a much smaller percentage of the overall $27T (the enormous inflation and bubble happened elsewhere in the market)
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

Somewhat like pulling out a single budget item that accounts for 2% of the $14T federal debt ($280B) and claiming that it is responsible for all of the federal woes (again, obfuscation and misdirection).

One of the other issues (involving Buffett) in 2008 was that the muni-bond market totally collapsed (investors realizing that if the rating agencies were selling triple-A ratings for toxic CDOs ... how could any ratings be trusted). Buffett then stepped in and started offering "insurance" to get the muni-bond market back up and running.

However, the muni-bond market is also under pressure from collateral damage of the bubble collapse: 1) the real-estate bubble implosion cuts municipal revenue ... affecting their credit ratings, and 2) with all the speculation, there appeared to be more demand than there actually was; builders built a lot of new developments for that demand illusion, and municipalities sold bonds to put in utilities and streets for the new developments (the idea being that when the new properties sold, the new revenue would cover the costs of the new facilities ... which has yet to happen).

Similarly, local banks have had collateral damage ... commercial builders borrowed to put in new strip malls for the new home developments. The new home developments aren't selling, the new strip malls don't have buyers/leases, the commercial builders have to default, and the local banks are collateral damage.

The speculators using no-documentation, no-down, 1% interest-only payment ARMs treated the real-estate market like the 20s stock market (possibly 2000% ROI in regions with 20-30% real-estate inflation; these mortgages were the equivalent of the Brokers' Loans behind the 20s stock market boom/crash) ... and the real-estate boom/crash has collateral damage that spreads out through much of the economy.

Early spring 2009, I was asked to take the scan of the 30s Senate Pecora hearings (scanned the previous fall at the Boston public library and online at the wayback machine), HTML it with heavy cross-indexing, and add lots of references between what went on then and what happened this time. Apparently there was some expectation that the new congress had some appetite to do something ... however, after I had done quite a bit of work, I got a call saying it wouldn't be needed after all (no substantial difference between the new congress and the congresses from the previous couple decades).

misc. past posts mentioning muni-bond market:
https://www.garlic.com/~lynn/2008j.html#9 dollar coins
https://www.garlic.com/~lynn/2008j.html#20 dollar coins
https://www.garlic.com/~lynn/2008j.html#23 dollar coins
https://www.garlic.com/~lynn/2008k.html#16 dollar coins
https://www.garlic.com/~lynn/2008k.html#23 dollar coins
https://www.garlic.com/~lynn/2008o.html#45 The human plague
https://www.garlic.com/~lynn/2008o.html#52 Why is sub-prime crisis of America called the sub-prime crisis?
https://www.garlic.com/~lynn/2008p.html#60 Did sub-prime cause the financial mess we are in?
https://www.garlic.com/~lynn/2008q.html#11 Blinkenlights
https://www.garlic.com/~lynn/2008q.html#20 How is Subprime crisis impacting other Industries?
https://www.garlic.com/~lynn/2009b.html#78 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#29 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#77 Who first mentioned Credit Crunch?
https://www.garlic.com/~lynn/2009e.html#8 The background reasons of Credit Crunch
https://www.garlic.com/~lynn/2009n.html#47 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2010f.html#81 The 2010 Census
https://www.garlic.com/~lynn/2010l.html#53 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010p.html#17 What banking is. (Essential for predicting the end of finance as we know it.)

--
virtualization experience starting Jan1968, online at home since Mar1970

Colossal Cave Adventure in PL/I

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Colossal Cave Adventure in PL/I
Newsgroups: alt.folklore.computers
Date: Wed, 02 Feb 2011 11:55:41 -0500
re:
https://www.garlic.com/~lynn/2011b.html#41 Colossal Cave Adventure in PL/I

the TREAD/TWRITE statements appear to be from the Orvyl(/Wylbur?) system, which predates TSO (tget/tput macros). Stanford was one of the univs. that got a 360/67 for tss/360 ... and then, when tss/360 ran into problems ... did its own virtual memory system (akin to MTS at univ of michigan) ... the Wylbur user interface/editor later being ported to OS/360 (and later MVS).

the implication was that the tread/twrite PLI version had originally been done for orvyl back in the 60s at stanford.

reference to (pdp/dec) sail (& adventure) at stanford
http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html
another reference to (pdp/dec) sail (& adventure) at stanford
http://www.stanford.edu/~learnest/sailaway.htm
wiki page:
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

the adventure translation from sail to orvyl at stanford would appear to be a natural. Moving the fortran version from the stanford PDP machine to the tymshare (nearby in silicon valley) PDP machine would have been straightforward ... as would tymshare porting the fortran version from dec to vm/cms.

misc. past posts mentioning Orvyl
https://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
https://www.garlic.com/~lynn/2007g.html#33 Wylbur and Paging
https://www.garlic.com/~lynn/2007m.html#62 nouns and adjectives
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2010e.html#79 history of RPG and other languages, was search engine history
https://www.garlic.com/~lynn/2010e.html#82 history of RPG and other languages, was search engine history
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2011.html#86 Utility of find single set bit instruction?

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Feb, 2011
Subject: Productivity And Bubbles
Blog: IBMers
re:
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#43 Productivity And Bubbles

long-winded early '99 post that discusses some number of the details of the current situation ... including that in '89, citi had figured out that its ARM mortgage portfolio could take down the institution (even a regulated, safety&soundness one), unloaded the portfolio, got out of the market, and required a (private/foreign) bailout to stay in business.
https://www.garlic.com/~lynn/aepay3.htm#riskm

Then, late '99, the Bank Modernization act ("GLBA") passes. On the floor of congress, the rhetoric was that the main purpose of the bill was that if you were already a (regulated, depository institution) bank, you got to remain a bank; and if you weren't already a bank, you didn't get to become one (specifically calling out walmart and microsoft as examples). "GLBA" also repealed Glass-Steagall (out of the 30s Pecora hearings, designed to prevent a repeat of unregulated risky investment banks taking down safe&sound regulated depository institutions) and added "opt-out" privacy-sharing provisions (federal pre-emption of the cal. bill in progress that required "opt-in" for privacy sharing).

Roll forward into this century, and a combination of all the regulatory repeal, as well as lack of enforcement of the regulations that did exist, allowed a lot of isolated greed and corruption hot-spots to combine into a financial firestorm.

The unregulated loan originators were able to use paying for triple-A ratings and loan securitization (toxic CDOs) as a nearly unlimited source of funds ... and also to write loans that didn't meet GSE/fannie/freddie standards (the "standardless" loan market was new, w/o a lot of competition ... since everybody else had been writing loans that met some minimum set of standards).

Wallstreet adored the $27T in new transactions ... since it was a brand new major source of fees and commissions. Courtesy of the repeal of Glass-Steagall, unregulated investment banking arms of safe/sound depository institutions could play in the fees&commissions (otherwise prevented by regulatory safety & soundness restrictions). The major too-big-to-fail depository institutions didn't necessarily directly cause the real-estate market to be treated as the '20s stock market ... but they provided a lot of the funds that fueled the bubble.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

At the end of 2008, the four largest too-big-to-fail institutions were carrying $5.2T in triple-A rated toxic CDOs "off-balance"
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

Note that the large triple-A rated toxic CDO holdings are effectively mostly a form of ARM mortgage portfolio, and the largest holder is citi ... which required a new (gov) bailout to stay in business (the institutional knowledge from '89 seems to have evaporated).

Now there are periodic smokescreens attempting to ascribe problems involving trillions of dollars to causes that only involved billions of dollars (a difference of two to three orders of magnitude).

GLBA footnote ... TARP funds were a drop in the bucket for fixing wallstreet ... the big bailout for the wallstreet too-big-to-fail institutions was the Federal Reserve's support for regulated depository institutions. The Federal Reserve had been fighting release of the information ... but the courts finally got some of it:
http://www.csmonitor.com/USA/2010/1201/Federal-Reserve-s-astounding-report-We-loaned-banks-trillions

including that it was paying 98 cents on the dollar for off-balance toxic assets (that had been going for 22 cents on the dollar). However, all this was only for institutions that had bank charters. There were still a couple of large wallstreet investment banks that were in deep trouble ... and weren't "banks". Supposedly GLBA would have prevented them from becoming banks ... but the federal reserve managed to give them bank charters anyway ... so they would also have access to federal reserve assistance.

And as to the GLBA federal pre-emption of the cal. financial institution "opt-in" privacy sharing: in the middle of the last decade, I was at an annual, national privacy conference at the Renaissance hotel in wash dc ... and they had a panel discussion with the FTC commissioners. Somebody in the back of the room said he was associated with financial industry "call centers" and claimed that none of the people answering "opt-out" (privacy sharing) calls had any method to record the information (GLBA allowed the financial industry to use your privacy information unless they had a record of your "opt-out" ... as opposed to cal. only allowing use of your privacy information if they had a record of your "opt-in").

--
virtualization experience starting Jan1968, online at home since Mar1970

zLinux OR Linux on zEnterprise Blade Extension???

From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Feb, 2011
Subject: zLinux OR Linux on zEnterprise Blade Extension???
Blog: MainframeZone
re:
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#8 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#12 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#21 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???

repeat of old upthread reference to early Jan92 meeting in Ellison's conference room regarding massive parallel
https://www.garlic.com/~lynn/95.html#13

misc. old email involving both commercial (including DBMS) as well as technical and scientific cluster product
https://www.garlic.com/~lynn/subtopic.html#hacmp

and old email regarding work on cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

the last email in the above was possibly only hrs before the cluster scale-up was transferred and we were told we couldn't work on anything with more than four processors. A couple weeks later (2/17/92), there was an announcement that it was only for technical and scientific
https://www.garlic.com/~lynn/2001n.html#6000clusters1

and repeat of upthread reference to "Release No Software Before Its Time" (parallel processing posts from nearly two yrs ago):
https://www.garlic.com/~lynn/2009p.html#43
https://www.garlic.com/~lynn/2009p.html#46

we had worked with a number of RDBMS vendors that had "cluster" products for the old VAX/VMS cluster product ... vendors who had a number of things to say about the short-comings/bottlenecks in that implementation. So we drew on

(1) past (mainframe) loosely-coupled experience ... my wife did a stint in POK responsible for loosely-coupled (mainframe) architecture, where she developed the Peer-Coupled Shared Data architecture ... misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata
which saw little uptake until sysplex (helping account for her not staying long in the position)

(2) avoiding various VAX/VMS cluster deficiencies ... while emulating the API ... minimizing the effort needed for RDBMS vendors to port from VAX/VMS cluster to HA/CMP cluster (a rough sketch of that style of API below).
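
to make (2) concrete: the VAX/VMS cluster API in question is essentially a lock-manager style interface ... a ladder of lock modes plus enqueue/dequeue calls. below is a toy, single-process sketch in C of what such an API looks like; the names and signatures are illustrative, not the actual VMS or HA/CMP interfaces (a real distributed lock manager is asynchronous and keeps per-resource grant/convert queues across cluster nodes):

#include <stdio.h>
#include <string.h>

/* classic six-mode ladder: null, concurrent read, concurrent write,
   protected read (share), protected write (update), exclusive */
typedef enum { NL, CR, CW, PR, PW, EX } lock_mode;

/* compatibility matrix: compat[held][requested] */
static const int compat[6][6] = {
    /*         NL CR CW PR PW EX */
    /* NL */ {  1, 1, 1, 1, 1, 1 },
    /* CR */ {  1, 1, 1, 1, 1, 0 },
    /* CW */ {  1, 1, 1, 0, 0, 0 },
    /* PR */ {  1, 1, 0, 1, 0, 0 },
    /* PW */ {  1, 1, 0, 0, 0, 0 },
    /* EX */ {  1, 0, 0, 0, 0, 0 },
};

#define MAXLOCKS 64
static struct { char res[32]; lock_mode mode; int used; } table[MAXLOCKS];

/* grant a lock if compatible with every lock already held on the
   resource; return an id >= 0, or -1 if it would have to wait
   (a real DLM would queue the request and notify when granted) */
int lock_enq(const char *res, lock_mode mode)
{
    int i, slot = -1;
    for (i = 0; i < MAXLOCKS; i++) {
        if (table[i].used && strcmp(table[i].res, res) == 0 &&
            !compat[table[i].mode][mode])
            return -1;
        if (!table[i].used && slot < 0)
            slot = i;
    }
    if (slot < 0)
        return -1;
    strncpy(table[slot].res, res, sizeof table[slot].res - 1);
    table[slot].mode = mode;
    table[slot].used = 1;
    return slot;
}

void lock_deq(int id) { if (id >= 0 && id < MAXLOCKS) table[id].used = 0; }

int main(void)
{
    int a = lock_enq("DB.PAGE.42", PR);   /* shared read: granted   */
    int b = lock_enq("DB.PAGE.42", PR);   /* second reader: granted */
    int c = lock_enq("DB.PAGE.42", EX);   /* exclusive: would wait  */
    printf("a=%d b=%d c=%d\n", a, b, c);  /* prints a=0 b=1 c=-1    */
    lock_deq(a); lock_deq(b);
    printf("ex after release: %d\n", lock_enq("DB.PAGE.42", EX));
    return 0;
}

an RDBMS port coded against this style of interface mostly just needs the same grant/convert semantics preserved, which is what emulating the API (rather than re-inventing it) buys.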

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 02 Feb, 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
with regard to FBA mentioned upthread

misc. past posts mentioning CKD & FBA
https://www.garlic.com/~lynn/submain.html#dasd

including references to being told that even if I gave POK's favorite son operating system (MVS) completely integrated & tested FBA support, I still needed a $26M business case (to cover documentation & training). I wasn't able to use lifetime costs for the business case; it had to be incremental new disk sales ... possibly $200m-$300m in sales to have the profit to cover the $26M (and then the claim was that FBA support would just result in customers buying the same amount of FBA as CKD).

currently all CKD is emulation on real FBA (rough sketch of the idea below).

one of the issues for MVS in the explosion of (vm370) 4341 installs ... is that 3370 FBA was the "mid-range" disk technology (3380 was high-end CKD, but there was no CKD mid-range). MVS could somewhat play when a 4341 was used to replace earlier 370s with existing CKD. Eventually they came out with the 3375 for the MVS mid-range (i.e. CKD emulation on top of 3370).
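
a rough sketch of what "CKD emulation on real FBA" involves: reserve a fixed number of FBA blocks per emulated CKD track and map the (cylinder, head) geometry onto a linear block address. all the geometry numbers below are invented for illustration; a real emulation also has to lay out each track's variable-length count/key/data records inside its fixed-block allotment:

#include <stdio.h>

/* toy CKD-on-FBA address mapping; geometry figures are made up */
enum {
    HEADS_PER_CYL    = 15,   /* tracks per cylinder (illustrative)      */
    BLOCKS_PER_TRACK = 96,   /* fixed FBA blocks reserved per CKD track */
    BLOCK_SIZE       = 512   /* FBA block size in bytes (illustrative)  */
};

/* linear FBA block number where an emulated CKD track's image starts */
static unsigned long track_to_fba(unsigned cyl, unsigned head)
{
    unsigned long track = (unsigned long)cyl * HEADS_PER_CYL + head;
    return track * BLOCKS_PER_TRACK;
}

int main(void)
{
    unsigned cyl = 100, head = 7;
    unsigned long fba = track_to_fba(cyl, head);
    printf("CKD (cyl=%u,head=%u) -> FBA block %lu (byte offset %lu)\n",
           cyl, head, fba, fba * (unsigned long)BLOCK_SIZE);
    return 0;
}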

past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#33 A brief history of CMS/XA, part 1

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Wed, 02 Feb 2011 23:23:05 -0500
re:
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks

from long ago and far away ...

Date: 03/15/87 11:51:24 PST
From: wheeler

re: spring '85, processor cluster memos;

Technology is getting there. Couple weeks ago heard mention of Fujitsu 3mip vm/pc 370 card available later this year. Recently somebody mentioned that when ordering 700+mbyte, 3380-performance, 5.25 harddisk, the vendor asked him if he would prefer to wait 3 months for shipment of double density drives. Friday night somebody mentioned introductory offer on a populated 4megbyte memory board for $650.

Shouldn't be too long before there could be a board with 16meg memory and 5mip 370 (somewhere in the $1k-$3k range).

1) with dirt cheap memory, systems have got to come up with new innovative methods of using that memory effectively. For instance UNIX pipes rather than DASD scratch files can avoid significant amounts of DASD I/O.

2) systems leveraging price/performance of the large quantity mass produced boards for computational complexes.

3) 3380 in 5.25 packaging sizes

4) 3380 frames housing hoards of 5.25 drives with control unit that stripes data across multiple drives (4k FBA with 256bytes blocks on multiple different drives). Data is recoverable even in cases of multiple drive failures. More efficient and better recovery than DASD mirroring. And/or lots of memory for full-track caching in the control unit, super-large tracks (i.e. *8 or *16 real track multiple) and potential of simultaneous data transfer on all drives (plus all the recoverability, say 16*3mbyte=48mbyte aggregate data transfer).

What other ways are there to leverage some of the consumer electronics business? Can a japanese compact disk player be turned into a cheap fiber optic modem? and/or would it be cheaper to post-manufacture the compact disk player into a modem than it would be to set up a separate line to just do a modem (getting both a compact disk drive and a fiber modem in a single package ... using a common reed-solomon error chip and other components)?


... snip ... top of post, old email index
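
point 4 in the email is essentially what later got named RAID: split a 4k block into small chunks across many drives and add redundancy so the data survives drive failures. a toy sketch in C with a single XOR parity chunk (the email's reed-solomon coding tolerates multiple drive failures; simple XOR parity tolerates one, used here purely for illustration):

#include <stdio.h>
#include <string.h>

#define CHUNK   256                  /* per-drive chunk, per the email */
#define DRIVES  16                   /* data drives                    */
#define BLOCK   (CHUNK * DRIVES)     /* 4096-byte logical block        */

static unsigned char drive[DRIVES][CHUNK];  /* data chunks  */
static unsigned char parity[CHUNK];         /* parity chunk */

/* stripe one 4k block: chunk d goes to drive d, parity is the XOR of
   all the data chunks */
static void stripe_write(const unsigned char block[BLOCK])
{
    memset(parity, 0, CHUNK);
    for (int d = 0; d < DRIVES; d++) {
        memcpy(drive[d], block + d * CHUNK, CHUNK);
        for (int i = 0; i < CHUNK; i++)
            parity[i] ^= drive[d][i];
    }
}

/* rebuild one failed drive's chunk by XORing parity with the rest */
static void rebuild(int failed)
{
    for (int i = 0; i < CHUNK; i++) {
        unsigned char x = parity[i];
        for (int d = 0; d < DRIVES; d++)
            if (d != failed)
                x ^= drive[d][i];
        drive[failed][i] = x;
    }
}

int main(void)
{
    unsigned char block[BLOCK], saved[CHUNK];
    for (int i = 0; i < BLOCK; i++)
        block[i] = (unsigned char)(i * 31);
    stripe_write(block);

    memcpy(saved, drive[5], CHUNK);  /* pretend drive 5 fails */
    memset(drive[5], 0xFF, CHUNK);
    rebuild(5);
    printf("drive 5 recovered: %s\n",
           memcmp(saved, drive[5], CHUNK) == 0 ? "yes" : "no");
    return 0;
}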

the '85 processor cluster memos were referenced in this old post
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor

misc. old email mentioning 801, iliad, romp, risc, etc
https://www.garlic.com/~lynn/lhwemail.html#801

old posts mentioning getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

this is old email about a special head that could read/write 16+2 tracks simultaneously ... a somewhat similar data rate:
https://www.garlic.com/~lynn/2006s.html#email871230

in the late 70s, one of the engineers in bldg.14 got a patent on what would later be called RAID. It was eventually used by the s/38 (because disk failure was so traumatic in the s/38 environment). recent ref
https://www.garlic.com/~lynn/2011.html#14 IBM Future System

--
virtualization experience starting Jan1968, online at home since Mar1970

vm/370 3081

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 03 Feb, 2011
Subject: vm/370 3081
Blog: IBM Historic Computing
from long ago and far away ...

Date: 01/23/83 13:47:38
From: wheeler

re: vm1; we have a 3081k which is nominally rated at two 7 mip processors. 3033 (&3081d) are rated at 4-5 mip processors. 3081k is pretty much a 3081d with a cache that is twice as large. The nominal mip rating is based on expected cache miss ratios & doubling cache size would normally imply cutting the cache miss ratio and improving nominal mip rate.

Lots of programs that we ran on the 3033 actually had very tight code, i.e. they ran with very low cache miss ratios ... & in fact would run in the neighborhood of 7 mips. That type of program would only get marginally higher mip rates when moved to a 3081k.

There is another factor tho. We ran the 3033 with a "UP" version of the CP software. To support the 3081k (which is an MP), we had to generate an "MP" version of the CP software. The "MP" version of the CP software can increase the CP pathlength by 15-50%. What does that mean for a particular program? If the program executes with a 50% virtual/50% supervisor overhead ratio ... then the movement to the 3081k will increase the supervisor overhead percentage by 15-50% ... but also execute the total number of instructions 5%-40% faster (possibly resulting in a net elapsed time decrease, possibly not).

Secondly, there are two processors when there used to be one. Offshift and weekend interactive usage will see little effect from the two processors. The total elapsed time for executing multiple, long running programs should be cut in half. Finally there are the QCHEM programs. When they knew there was only one processor ... they only started up one QCHEM virtual machine. Now that there are two processors ... they have two QCHEM virtual machines running.

Finally, if the program is not CPU bound, but requires a large amount of paging &/or minidisk I/O ... the I/O is going to take the same amount of elapsed time as previously (i.e. if 60% of the elapsed time involves I/O ... that portion will remain constant and the change over to the 3081K will only affect the remaining 40%).


... snip ... top of post, old email index
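
the tradeoff in the email fits a small back-of-envelope model: inflate the supervisor fraction of the time by the MP pathlength penalty, then scale everything by the faster instruction rate. the numbers below are assumed mid-range values picked from the email's own ranges, CPU-bound case only (per the email's last paragraph, any I/O-bound portion stays constant):

#include <stdio.h>

/* back-of-envelope 3033(UP) -> 3081K(MP) model from the email above;
   all figures are illustrative mid-range picks from the stated ranges */
int main(void)
{
    double virt = 0.50;       /* fraction of time in virtual machine code */
    double supv = 0.50;       /* fraction of time in CP (supervisor) code */
    double mp_penalty = 1.30; /* MP kernel pathlength inflation (15-50%)  */
    double speedup = 1.20;    /* 3081K instruction-rate gain (5-40%)      */

    double new_time = (virt + supv * mp_penalty) / speedup;
    printf("relative elapsed time: %.3f (1.000 = 3033 UP)\n", new_time);
    /* prints ~0.958 ... only a marginal net win for this case; with a
       50% pathlength penalty and only a 5% speedup it would be a net
       loss (1.25/1.05 = ~1.19) */
    return 0;
}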

past tales about how VM/SP1 changes significantly increased (vm370) kernel multiprocessor overhead ... changes primarily targeted at improving virtual ACP/TPF thruput. 308x was initially announced as multiprocessor-only, but ACP/TPF didn't have multiprocessor support. VM/SP1 multiprocessor support was reworked in an attempt to increase the concurrency of overlapping virtual machine simulation functions (like I/O) with virtual machine execution (specifically targeting reduced ACP/TPF elapsed time via increased overlap w/multiprocessor). However, this restructuring increased the overall multiprocessor overhead for all customers (all for the special-case ACP/TPF concurrency ... attempting to make use of the 2nd, otherwise idle, processor in a dedicated ACP/TPF environment)
https://www.garlic.com/~lynn/2007.html#email820512
https://www.garlic.com/~lynn/2007.html#email820513
https://www.garlic.com/~lynn/2007.html#email860219

a reference to 3081 being left-over, very expensive FS technology
http://www.jfsowa.com/computer/memo125.htm

wiki page:
https://en.wikipedia.org/wiki/IBM_3081

eventually the 3083 was created, motivated by the ACP/TPF market ... basically a 3081 with one processor removed. A slight problem: the simplest approach was to remove processor-1, leaving processor-0 ... but processor-0 was located at the top of the 3081, which would have left the 3083 potentially dangerously top-heavy. misc. past posts:
https://www.garlic.com/~lynn/2009r.html#70 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010i.html#78 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010n.html#16 Sabre Talk Information?
https://www.garlic.com/~lynn/2011.html#19 zLinux OR Linux on zEnterprise Blade Extension???

other posts mentioning 3083
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#8 CCD technology
https://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005s.html#7 Performance of zOS guest
https://www.garlic.com/~lynn/2005s.html#38 MVCIN instruction
https://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
https://www.garlic.com/~lynn/2007.html#33 Just another example of mainframe costs
https://www.garlic.com/~lynn/2007.html#44 vm/sp1
https://www.garlic.com/~lynn/2007g.html#16 What's a CPU second?
https://www.garlic.com/~lynn/2007o.html#37 Each CPU usage
https://www.garlic.com/~lynn/2007t.html#30 What do YOU call the # sign?
https://www.garlic.com/~lynn/2008c.html#83 CPU time differences for the same job
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008g.html#14 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008i.html#38 American Airlines
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#51 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#57 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing
https://www.garlic.com/~lynn/2009g.html#66 Mainframe articles
https://www.garlic.com/~lynn/2009g.html#68 IT Infrastructure Slideshow: The IBM Mainframe: 50 Years of Big Iron Innovation
https://www.garlic.com/~lynn/2009g.html#70 Mainframe articles
https://www.garlic.com/~lynn/2009h.html#77 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009l.html#65 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009l.html#67 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009m.html#39 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2010.html#1 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010.html#21 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#14 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2010j.html#20 Personal use z/OS machines was Re: Multiprise 3k for personal Use?

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 04 Feb 2011 10:18:54 -0500
re:
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks

old email mentioning processor clusters; they wanted me in ykt all week on the subject ... but I was also scheduled to be in DC with NSF.

I was being pressured to do the processor cluster work all week and get somebody else to handle the NSF high-speed networking presentation
https://www.garlic.com/~lynn/2007d.html#email850315
also mentioned here
https://www.garlic.com/~lynn/2011b.html#34 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#40 Colossal Cave Adventure in PL/I

Date: 03/14/85 12:25:05
From: wheeler

going to be here on monday? going to dc for nsf meeting on tuesday?

I figure I can take red-eye to NY monday night & get a flight to DC in time for the nsf meeting. can be back in ykt some time on weds

did i blow enuf smoke on processor clusters?


... snip ... top of post, old email index, NSFNET email

... also leading up to the above:

Date: 03/13/85 10:32:58
From: wheeler

you haven't said much recently .. i'll be in ykt at least next thursday ... if not weds. also. I'll be in wash. dc on tuesday for NSF pitch in the afternoon.


... snip ... top of post, old email index, NSFNET email

I had to get somebody else to do the pitch to NSF director so I could do the processor cluster meetings in YKT ... the day before that

Date: 03/12/85 12:51:18
From: wheeler

Univ. of Cal. has gotten a $120m NSF grant to set-up a super computer regional center in San Diego. I've been asked to make a pitch to them at Berkeley ... getting lots of data into & out of the center ... apparently this work is beyond the leading edge of anything else out there.

As a political move, in the past two days, a presentation was set-up with the head of NSF for next Tuesday so he can get the pitch before UofC sees it.


... snip ... top of post, old email index, NSFNET email

older network information

Date: 09/16/83 10:44:22
From: wheeler

re: csnet/arpanet; Arpanet is getting very restrictive about connection ... just about have to have a DoD contract. CSNET started up with an NSF grant, and a large number of universities and other locations have connections. Quite a few of the CSNET locations have Arpanet gateway connections which provide mail forwarding facilities.

Both SJR & YKT have connections to CSNET machines which will transport mail across the boundary. In addition, there is an external VNET network called BITNET which currently has (at least) 30-40 locations. YKT & CAMB. have connections to BITNET.

Implemented IBM security procedures currently require only "authorized" nodeid/userid use of the gateways (i.e. the IBM gateway machines will only process mail being sent by/to authorized and registered nodeid/userid). Other mail is returned unsent.


... snip ... top of post, old email index
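
the gateway policy in the email reduces to a simple check: only mail to/from registered nodeid/userid pairs crosses the boundary, everything else is returned unsent. a toy sketch in C (the table entries and names are invented for illustration):

#include <stdio.h>
#include <string.h>

struct addr { const char *node, *user; };

/* invented registry; the real one was administered per the IBM
   security procedures mentioned in the email */
static const struct addr registered[] = {
    { "SJRLVM1", "WHEELER" },
    { "YKTVMV",  "SOMEONE" },
};

static int authorized(const char *node, const char *user)
{
    size_t i;
    for (i = 0; i < sizeof registered / sizeof registered[0]; i++)
        if (strcmp(registered[i].node, node) == 0 &&
            strcmp(registered[i].user, user) == 0)
            return 1;
    return 0;
}

int main(void)
{
    printf("%s\n", authorized("SJRLVM1", "WHEELER")
                       ? "forward across gateway" : "return unsent");
    printf("%s\n", authorized("RANDOM", "NOBODY")
                       ? "forward across gateway" : "return unsent");
    return 0;
}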

old reference to original SJR connection to CSNET
https://www.garlic.com/~lynn/internet.htm
including
https://www.garlic.com/~lynn/internet.htm#email821021

other old email mentioning NSFNET
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

later processor cluster email
https://www.garlic.com/~lynn/lhwemail.html#medusa

as part of ha/cmp cluster scale-up
https://www.garlic.com/~lynn/subtopic.html#hacmp

misc. high speed networking (HSDT) posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 04 Feb 2011 11:12:22 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Univ. of Cal. has gotten a $120m NSF grant to set-up a super computer regional center in San Diego. I've been asked to make a pitch to them at Berkeley ... getting lots of data into & out of the center ... apparently this work is beyond the leading edge of anything else out there.

As a political move, in the past two days, a presentation was set-up with the head of NSF for next Tuesday so he can get the pitch before UofC sees it.


re:
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks

some folklore at the time was that everybody figured that the supercomputer would go to berkeley ... but supposedly the UofC regents had a schedule that UCSD would get the next new bldg. ... and some big portion of the NSF supercomputer grant would go for constructing a new bldg ... and so the regents decided that the supercomputer center had to be done in San Diego.

--
virtualization experience starting Jan1968, online at home since Mar1970

A brief history of CMS/XA, part 1

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Feb 2011
Subject: A brief history of CMS/XA, part 1
Blog: IBM Historic Computing
other bits & pieces from long ago and far away

Date: 8 November 1983, 10:51:29 EST
From: <*guess who*>

Greetings from the Wilds of Westchester! This is the (re)starting of a network discussion on CMS/XA. I would like to establish this as a forum in which our plans to add extended architecture support to CMS can be discussed. It is intended that this be in large part a technical discussion, but it is by no means restricted to that.

If you wish to participate, please send me a note. Since this distribution list is rather old and rusty in places, I request that everyone who wants to participate send me a note. If I don't hear from you, I'll assume that you choose not to participate. If you know of others who should be on this list, please send me their names and nodes, or have them contact me.

I would like to start things off by suggesting some reading. If you need a copy of any of the following, send me a note with your mailing address (internal or external as appropriate), and I'll send the item to you.

1. Is CMS Ready for Extended Architecture?, Pat Ryall, The Amdahl Corporation, dated 20 May 1983. Personal memo available on VMSHARE. Pat now works for IBM (RYALL at LSGVMB).
2. CMS Architecture and Interactive Computing, Charles Dainey, Tymshare, Inc., dated 22 August 1983. Presentation made at SHARE 61.

Both of the above are available on VMSHARE. The following papers are of general interest, but I think they relate strongly to issues of system design:
3. Nanotrends, Nicolas Zvegintzov, DATAMATION, August 1983, pp. 106-116. ...interesting paper on what is important in future systems.
4. Hints for Computer System Design, Butler Lampson, Proceedings of the 9th SOSP. ...excellent reading on system design and implementation.

Finally, I have the CMS/XA objectives document ready for your review. Please send me your mailing address and I'll send you a copy.


... snip ... top of post, old email index

and

Date: 05/17/84 12:40:44
From: wheeler

there is a cms/xa meeting around palo alto next week on the 22nd & 23rd, including meetings with INTEL, LSI, and Signetics.


... snip ... top of post, old email index

and

Date: 05/17/84 12:42:31
From: wheeler

re: cms/xa; i assume that message was an invitation ... I accept and am planning on attending. It might be worthwhile to also invite xxxx-xxxxx from ACIS as an observer. I know it isn't directly the subject of the meetings ... but I have a feeling that with that set of customers there will also be comments about UNIX.


... snip ... top of post, old email index

past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#19 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#20 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#23 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#29 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#33 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Feb 2011
Subject: Productivity And Bubbles
Blog: IBMers
re:
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#43 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#45 Productivity And Bubbles

The initial draft of Basel-II had a new "qualitative section" ... basically, in addition to the old standard quantitative measures for risk-adjusted capital ... "qualitative" would have required something slightly analogous to iso9000 ... top financial executives demonstrating that they understood the end-to-end business processes. During the review process ... it was all effectively eliminated. There are lots of complaints about the level of risk-adjusted capital in Basel3 ... but somebody observed recently that most european institutions are already operating with nearly twice the risk-adjusted capital called for in Basel3 (while there are comments that if too-big-to-fail institutions had to meet the Basel3 risk-adjusted capital, it would tank the economy).

This morning on a business cable station ... they had some presidential economic adviser from 30-40 yrs ago ... who claimed that there has been lots of number fiddling going on to obfuscate and misdirect what has happened. One comment was that wallstreet and the market have had all sorts of stimulus and help ... but mainstreet hasn't ... eliminating all the statistics and hype, supposedly the total number employed at the moment is the same as were employed in the 90s (factoring out all the ups & downs, the economy has been stagnant for the last 15 yrs or so, while the population has increased significantly).

There have been comments that this has been a substitute for doing (30s-style) Pecora hearings and having to take any substantive action ... along with jokes about the (tens of?) billions spent on FIRE (financial, insurance, real-estate) lobbying over the past 15 yrs.
https://www.amazon.com/Financial-Crisis-Inquiry-Report-ebook/dp/B004KZP18M

When everything was falling apart here in the US ... there were references to how far around the world the effects would propagate. This was about the same time there was an article that the ratio of US executive compensation to worker compensation had exploded to 400:1, after having been 20:1 for a long time (and 10:1 in most of the rest of the world). An article was written claiming that the too-big-to-fail problems wouldn't hit china because their financial institutions were much better regulated AND their financial executives had total compensation in the hundreds of thousands range (not tens & hundreds of millions). There was also some claim that the concessions/benefits wallstreet received from congress were over 1000 times what they spent "lobbying".

--
virtualization experience starting Jan1968, online at home since Mar1970

Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 04 Feb 2011
Subject: Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2011b.html#11 Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster

the recently re-written magstripe wiki still retains some los gatos lab references
https://en.wikipedia.org/wiki/Magnetic_stripe_card

IBM has a new "100 yr" history site ... which has a magstripe entry
http://www.ibm.com/ibm100/us/en/icons/magnetic/

but doesn't mention los gatos lab involvement.

the ATM 3624 wiki still mentions los gatos lab:
https://en.wikipedia.org/wiki/IBM_3624

--
virtualization experience starting Jan1968, online at home since Mar1970

Speed of Old Hard Disks

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Speed of Old Hard Disks
Newsgroups: alt.folklore.computers
Date: Fri, 04 Feb 2011 18:18:28 -0500
re:
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks

more topic drift ... mostly NSF backbone stuff ... but also references to getting sucked into this stuff while I was also working on large numbers of processors in clusters ... the cluster stuff got somewhat stalled, but was revived later in ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

of course, all of this was before internal politics shut most of it down (at least externally). The head of NSF then tried to help with a letter to the corporation ... but that just aggravated the internal politics; ref
https://www.garlic.com/~lynn/2006s.html#email860417

old nsfnet email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

As previously mentioned, I couldn't do the HSDT pitch at NSF because I was also doing all this processor cluster stuff, involving lots of meetings on the east coast; referenced in the attached.

Date: 03/25/85 09:05:58
From: wheeler
To: UCB marketing rep

re: hsdt; my phone is still not being answered so I'm not getting any messages. I was on east coast tues-friday of last week. Something has come up & will have the same schedule this week. Will also be back there the week of april 19th.

HSDT pitch to NSF & Erich Bloch last week went extremely well. He not only wants to have a back-bone system to tie together all the super-computer centers ... but NSF appeared to have been extremely impressed with the technology and would like to work with us at a technology level.

How is the UCB scheduling going???


... snip ... top of post, old email index, NSFNET email

Date: 03/25/85 16:11:50
From: wheeler
To: other UCB marketing rep

I'm currently available on the 15th or anytime the week of the 22-26. Presentation will essentially be the same one given to the head of NSF, for use as backbone to tie all the supercomputer centers together. Will also be making the proposal for a couple of university type applications, in addition to using it as backbone for government labs (i.e. livermore, etc.). Bloch also expressed interest in the research aspects of the technology we are using for communicating.


... snip ... top of post, old email index, NSFNET email

Date: 03/26/85 16:00:23
From: UCB marketing rep
To: wheeler
Cc: other UCB marketing rep

Lynn

I received your note of 3-25-85 re. your presentations to NSF. We have generated interest in the Vice Chancellor for Computing, Raymond Neff. Could you provide us with a couple of dates the week of April 12 for planning purposes.


... snip ... top of post, old email index, NSFNET email

Date: 04/02/85 11:02:50
From: wheeler

HSDT was pitched to the head of NSF & some of the staff about a week and a half ago. They were very favorably impressed ... NSF would like to use it to tie together all the super computer centers (backbone network). They also expressed a lot of interest in the HSDT technology & would like to have some meetings out here in SJR with various NSF technical people ... if possible, have some joint projects.

I'm meeting with some ACIS people this afternoon who have heard of the results and are anxious to use HSDT as a lever to get IBM more involved with the super computer center.

I'm also presenting HSDT to the UofC people next thursday in Berkeley. UofC has interest in using HSDT to tie together their campuses and also use it as a "feeder" to the San Diego super computer center.


... snip ... top of post, old email index, NSFNET email

other old hsdt email
https://www.garlic.com/~lynn/lhwemail.html#hsdt

and later cluster scale-up became the supercomputer
https://www.garlic.com/~lynn/lhwemail.html#medusa

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Feb, 2011
Subject: Productivity And Bubbles
Blog: IBMers
re:
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#43 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#45 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#53 Productivity And Bubbles

Seeing the review process for Basel-2 cut a lot of stuff out from the initial draft ... and then various regulators (especially in the US) exempt lots of stuff from Basel consideration (by allowing it to be carried off book/balance) ... I can't really blame BIS. Some observation about European institutions carrying higher risk adjusted capital than even called for in Basel-3 ... while lots in the US claiming Basel-3 would tank the economy ... would tend to highlight opposing national country forces, that BIS has to deal with.

In that sense US regulators would be in collusion with the too-big-to-fail ... institutions permitting the four largest too-big-to-fail to have $5.2T in toxic assets being carried off balance at the end of 2008. Note that some number of transactions involving tens of billions of those toxic assets had sold earlier in 2008 at 22cents on the dollar. If those institutions had been required to bring those toxic assets back "on book" ... they would have been declared insolvent and had to be liquidated (way beyond anything they would have been capable of by allocating additional risk adjusted capital).

As mentioned earlier ... it wasn't like the issues weren't well understood ... as per long-winded post from jan1999
https://www.garlic.com/~lynn/aepay3.htm#riskm

and some more old GEEK ... leading up to NSFNET backbone (and modern internet)
http://lnkd.in/JRVnGk

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Sun, 06 Feb 2011 10:40:58 -0500
"Joe Morris" <j.c.morris@verizon.net> writes:
IBM could have done a lot of things with the S/360 design, but I'll have to disagree with the idea that it would have been better had 14xx emulation been part of the basic architecture. Looking at the /360 line as you go up the model list you find less and less microprogramming; adding emulation features to the standard architecture definition would have been extremely expensive for the high-end boxes, which typically would be found in shops where it would serve no purpose whatever. Note that 14xx emulation was available in the low-end machines (which often did replace 14xx boxes) and 7090 emulation was available on the /65. (I don't recall what, if any, emulation was available on the /50, and I don't recall any emulation features on the /75 and above. Lynn?)

And for my purposes at the PPOE where we had a /40, I could not have cared less about word marks: we had users on a 7040 who moved their applications to the /40: what bothered these users was the difference in standard-precision floating point. The only 14xx application that was brought over to the /40 was the SYSIN/SYSOUT spooling utility (a total rewrite of the IBM-provided IOUP utility that I've mentioned in a recent posting).

And even with emulation as an extra-cost option you had the problem of shops that upgraded their hardware from a 14xx to a low-end S/360, then continued to run their 14xx programs for years as if nothing had changed. (One shop near my PPOE was still doing that in the 1970s...with the added silliness of punching out cartons of cards on a /40 emulating a 1401, manually sorting them, reading them back in, and discarding the cards...) Making emulation an extra-cost item encouraged the bean counters to look for shops to move away from emulation, at least on leased machines.


the claim is that the 75 was a "hard-wired" 65. Originally the models were 360/60 & 360/70 ... with 1 microsecond memory. Some enhancement resulted in 750ns memory and the models were renamed 360/65 & 360/75 (to reflect the faster memory) and the 360/60 & 360/70 designations were dropped (I don't believe any 60s & 70s were actually shipped). The 360/67 was a 360/65 with virtual memory hardware added.

then you get into 91, 95, & 195.

I never used a 360/50 and have no recollection of any statement about emulation features on the 50. I do know that there was a special microcode assist for CPS (conversational programming system, ran under os/360). Also, the science center had wanted a 360/50 to build a prototype virtual memory system (pending availability of the 360/67) but had to settle for a 360/40 (doing cp/40 before it morphed into cp/67 when the 360/67 became available) because all the available 360/50s were going to the FAA ATC effort. There would be a statement in the 360/50 functional characteristics manual ... but there isn't one up at bitsavers.

there is some folklore that the big cutover from lease to purchase in the early 70s was some executive about to retire who wanted to really boost his bonus on the way out the door.

long ago and far away I was told that in the gov. hearings into ibm ... somebody from RCA(?) testified that all the computer vendors had realized by the late 50s that the single most important feature was a single compatible architecture across the complete machine line (businesses were in a period of large growth ... starting with a smaller machine and then having to move up to larger machines ... a really big inhibitor to the market was software application development; being able to re-use applications significantly helped selling larger machines). The statement was that IBM had the only upper management able to force the individual plant managers (responsible for different product lines) to conform to a common architecture. While IBM may have lost out in some competition for specific machines in niche markets ... being the only vendor selling the single most important feature ... common architecture ... allowed IBM to dominate the market (being the only vendor selling the single most important feature would have allowed them to get nearly everything else wrong and still dominate the competition).

--
virtualization experience starting Jan1968, online at home since Mar1970

Other early NSFNET backbone

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Other early NSFNET backbone
Newsgroups: alt.folklore.computers
Date: Sun, 06 Feb 2011 11:46:54 -0500
re:
https://www.garlic.com/~lynn/2011b.html#30 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#32 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#40 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011b.html#48 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011b.html#50 Speed of Old Hard Disks

Date: 04/09/85 08:57:44
From: wheeler
To: distribution

re: hsdt; fyi; I met with a couple of ACIS people last week vis-a-vis NSF, HSDT, & super computers ... I presented the HSDT direction and various uses that we are making &/or planning for xxxxxx for data transport. They mentioned something about making it ACIS strategic direction. They would particularly like to ride the HSDT coat tails into the super-computer centers. I also told them about having prior contacts with the 10-meter telescope people. The ACIS people asked if I would be willing to also present to the space telescope people.

I will be presenting HSDT to UofC systems on Thursday in Berkeley; the ACIS people requested if they could send a representative to the meeting.

A combined NCAR/NSF meeting is being set up for the last week of April in Boulder to discuss using HSDT to tie together the 20 universities accessing the NCAR machines in Boulder (also the possibility of integrating the HSDT network for NCAR & the super computer centers).


... snip ... top of post, old email index, NSFNET email

and

Date: 11/06/85 11:22:22
From: wheeler
To: ibm earn

Is there any more follow-up about IBM/Europe supporting a SLAC/CERN link?? The HSDT backbone for the super computer centers is progressing. Currently ACIS is "carrying" the proposal for an NSF joint study with HSDT nodes at each of the centers. In addition, I'm going up to Berkeley on Friday to discuss a bay area node(s) for access to the super computer centers. Berkeley & Stanford are trying to co-ordinate something. It might be possible for there to be a tie-in between SLAC and the Stanford legs. That could make the CERN connection easier.


... snip ... top of post, old email index, NSFNET email

The above ibm earn email was to the person who had sent this email the year before:
https://www.garlic.com/~lynn/2001h.html#email840320

As previously mentioned ... internal politics resulted in all sorts of road blocks and stalling, and eventually shut down the external activity. Various NSFNET related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

Note, SLAC/CERN were "sister" institutions, sharing some amount of software, with staff tending to visit each other (&/or take sabbaticals at each other's institution) frequently. SLAC's reference to the 1st website outside CERN:
http://www.slac.stanford.edu/history/earlyweb/index.htm

SLAC was a big vm370 institution and for a long time hosted BAYBUNCH (monthly bay area vm370 user group meeting) ... SLAC was on Sand Hill road and had a linear accelerator that crossed beneath interstate 280 (aka Stanford Linear Accelerator Center)
http://www.slac.stanford.edu/

The Berkeley 10m telescope was renamed "Keck" when they got funding from the Keck foundation
http://www.keckobservatory.org/
https://en.wikipedia.org/wiki/W._M._Keck_Observatory

EARN was the European equivalent to BITNET ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

NCAR also had one of the early NAS/SANs, using HYPERchannel and remote device adapters. An IBM mainframe staged data to IBM disks ... and then loaded disk "channel programs" into a HYPERchannel (A515) remote device adapter (simulating an IBM mainframe channel), which the various other (non-ibm) mainframes in the HYPERchannel network could invoke (allowing direct data transfer from the ibm disks to the non-ibm mainframes w/o having to pass through the ibm mainframe).
http://www.ucar.edu/educ_outreach/visit/

In the early 90s, with prompting, lots of gov. labs attempted to commercialize internal technologies ... the NCAR NAS/SAN was ported to AIX and marketed as "Mesa Archival".

NCAR's NAS/SAN was also largely behind the later work to support "3rd party transfers" for HiPPI switches & IPI disks.

For other drift, Los Alamos had technology that was packaged and marketed by General Atomics (in san diego). General Atomics eventually also had the contract to operate the UCSD supercomputer center. When we were working with Livermore on commercializing their filesystem as Unitree (in conjunction with HA/CMP) ... there was a lot of activity with General Atomics.
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?reload=true&arnumber=113582

more recent
http://www.datastorageconnection.com/article.mvc/UniTree-A-Closer-Look-At-Solving-The-Data-Sto-0001

misc. past posts mentioning "Mesa Archival":
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#15 Device and channel
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2007j.html#47 IBM Unionization
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2009k.html#58 Disksize history question
https://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2010m.html#85 3270 Emulator Software

--
virtualization experience starting Jan1968, online at home since Mar1970

Productivity And Bubbles

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Feb, 2011
Subject: Productivity And Bubbles
Blog: IBMers
re:
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#43 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#45 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#53 Productivity And Bubbles
https://www.garlic.com/~lynn/2011b.html#56 Productivity And Bubbles

Asset management division's stumbles tarnish Goldman Sachs
http://www.washingtonpost.com/wp-dyn/content/article/2011/02/05/AR2011020504529.html

The paper version of the above had a comment that it is all a game to them ... money is just how the score is kept. One of the "game" metaphors during the height of the bubble was "musical chairs" ... wondering who would be left holding the toxic assets when the bubble burst (when the music stopped).

Goldman's TARP funds came at about the same time, and were about the same amount, as the 2008 compensation payout (for having lost money in 2008):
http://abcnews.go.com/WN/Business/story?id=6498680&page=1

from above:
Goldman Sachs, which accepted $10 billion in government money, and lost $2.1 billion last quarter, announced Tuesday that it handed out $10.93 billion in benefits, bonuses, and compensation for the year.

... snip ...

Goldman was also one of the investment banks that got bank charters (in theory this would have been precluded under the "prime purpose" of GLBA, discussed up thread). Easy federal reserve funds helped them (and others) make a bundle and pay back TARP funds. It could be considered an attempt to help preserve the 400+% spike in wallstreet bonuses during the bubble (even after the bubble had collapsed and institutions were losing money).
http://www.businessweek.com/#missing-article

Note that the original regulator that was directed by Reagan to cut the reserve requirements in half refused ... and was replaced by somebody who would follow directions. Much of the freed up reserves then disappeared into wallstreet, never to be seen again (and was a major part of the S&L crisis). That regulator was then rewarded with a very lucrative, prestigious job on wallstreet. Some of this is also discussed in the jan1999 post
https://www.garlic.com/~lynn/aepay3.htm#riskm

Note in a longwinded thread in the (linkedin) "Financial Crime Risk, Fraud and Security" group (still closed) titled "What do you think about fraud prevention in the governments?" ... there is mention of the recent book "America's Defense Meltdown" ... but the venality of wallstreet dwarfs the pentagon's ("13 Bankers: The Wallstreet Takeover and the Next Financial Meltdown" and "Griftopia: Bubble Machines, Vampire Squids, and the Long Con That is Breaking America"). A piece of it is archived in this post:
https://www.garlic.com/~lynn/2011.html#53

The "Defense Meltdown" frequently cites Boyd ... I had sponsored Boyd's briefings at IBM ... and some of the pentagon/wallstreet is also mentioned in (open linkedin) Boyd discussion group
http://www.linkedin.com/groups/Disciples-Boyds-Strategy-1015727?trk=myg_ugrp_ovr
"America's Defense Meltdown" ... open URL
http://lnkd.in/XtjMSM

The "Griftopia" reference especially goes into some detail that the parasitic/game activity has been going on forever ... periodically getting out of control and nearly killing the host (normally parasites are dependent on their hosts; killing the host can result in their own demise).

There have been some published studies claiming a large percentage are amoral (going well beyond simply greedy) ... which would require large amounts of adult supervision.

the other periodic reference is sociopath and/or amoral sociopath ... the issue is they know how they got their money ... but the claim is that many have no sense that ethics or morality is in any way involved. Another study claimed that a small percentage of children make it past 4-5 yrs old w/o acquiring a sense of morality ... and if they don't have it by then ... they grow up to never have it.

slightly related ... but different ... are the periodic references to business ethics being an oxymoron ... where people that appear to otherwise have a sense of morality ... don't see it as applicable in a business context.

in any case, there can be no expectation that these individuals will do right (requiring a large amount of structured environment and oversight).

quote recently seen on facebook

FDR in 1936: "We know now that Government by organized money is just as dangerous as Government by organized mob." Obama in 2011: ????

for a little more background, The Two Trillion Dollar Meltdown starts about 1970 and moves forward ... looking at all sorts of happenings over the past 40 yrs (kindle version on sale, only $2.88)
https://www.amazon.com/Two-Trillion-Dollar-Meltdown-Rollers-ebook/dp/B0097DE7DM/

It does mention that the regulator under Reagan was a major factor in the S&L crisis ... but deflects some attention from Reagan ... not mentioning that he replaced the previous regulator who wouldn't do what his president asked.

some more (old geek) discussion on early NSFNET backbone (operational precursor to modern internet) ... but in a.f.c. newsgroup:
https://www.garlic.com/~lynn/2011b.html#58

--
virtualization experience starting Jan1968, online at home since Mar1970

A Two Way Non-repudiation Contract Exchange Scheme

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Two Way Non-repudiation Contract Exchange Scheme.
Newsgroups: sci.crypt
Date: Sun, 06 Feb 2011 15:44:11 -0500
we were called in to help wordsmith the cal. electronic signature legislation. there seemed to be a lot of lobbying by vendors of digital signature technology to obfuscate the issues ... possibly hoping to create confusion because "human signature" and "digital signature" both contain the word "signature".

the lawyers working on the legislation very clearly spelled out that non-repudiation is a legal issue, not a technology issue ... and that a "human signature" requires some indication of intent: that a human has read, understood, agrees, approves, and/or authorizes (related to the appearance of a "human signature" as part of legally resolving a dispute).

in terminal point-of-sale ... it has been pointed out that entering the PIN is a form of authentication ... but the screen that asks if you approve the transaction ... where you have to click yes or no ... the "clicking" of yes is the equivalent of the "human signature" (independent of swiping the card & entering the PIN as a form of authentication).

in the 90s, there was some attempt to reduce resistance to deploying "digital signature" technology for payment transactions ... by claiming that digitally signed (aka authenticated) transactions could be considered "non-repudiation" and therefore could be used to bypass REG-E ... changing the burden of proof in a dispute from the merchant to the consumer. however, that still appeared to leave the consumer with the privilege of paying $100/annum for digital signature technology that, if used, would mean that in any dispute the burden of proof would be shifted from the merchant to the consumer (the consumer loses out all around ... while for the merchant it would be a huge savings, which then would cost-justify the merchant deploying the necessary digital signature technology).

trying to get legislation to mandate the use of "digital signatures" as the *ONLY* "electronic signature" was part of overcoming consumer resistance to the $100/annum expense (along with obfuscating the issue of whether a "digital signature" should be considered equivalent to a "human signature") ... but that still left the issue of why a consumer would want to accept the burden of proof in a dispute (involving a payment transaction).

re:
https://www.garlic.com/~lynn/subpubkey.html#signature

--
virtualization experience starting Jan1968, online at home since Mar1970

VM13025 ... zombie/hung users

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Feb, 2011
Subject: VM13025 ... zombie/hung users
Blog: z/VM
The summer I was at Boeing, I had done pageable kernel support for cp67. While it never shipped in the cp67 product, pageable kernel support (like lots of other stuff I had done as undergraduate &/or before joining IBM) did make it out in VM370. recent reference to Boeing:
https://www.garlic.com/~lynn/2010q.html#59 Boeing Plant 2 ... End of an Era

However, I did include pageable kernel support in the internal production CP67 (after joining the science center). Part of the pageable kernel support was creating a "SYSTEM" virtual memory table (for the kernel map of fixed & pageable components) and a "SYSTEM" utable (aka VMBLOK) to go with it.

Later, one of the things Charlie did (he invented the compare&swap instruction while working on fine-grain multiprocessor locking; the CAS mnemonic was chosen because those are Charlie's initials) was rewrite kernel virtual machine serialization (eliminating lots of zombie users as well as crashes from dangling activity after the virtual machine was gone). Part of the serialization rewrite was to re-assign pending operations to the "SYSTEM" utable (allowing user reset to complete, eliminating lots of zombies w/o having to worry about system crashes from outstanding activity waiting to complete). misc. past posts mentioning SMP and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
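
As an aside ... the serialization idiom that compare&swap enables (atomically handing pending work from one owner to another, without a global lock) looks roughly like the following sketch. C11 atomics stand in for the 370 CAS instruction, and the node/list names are hypothetical, not the actual CP control blocks:

#include <stdatomic.h>
#include <stddef.h>

/* hypothetical pending-operation node ... illustration only */
struct pending {
    struct pending *next;
    int op;
};

/* add a node to a list with compare-and-swap: re-read and retry until
   the head we saw is still the head at the instant we swap in the node */
void push(struct pending *_Atomic *head, struct pending *node)
{
    struct pending *old = atomic_load(head);
    do {
        node->next = old;
    } while (!atomic_compare_exchange_weak(head, &old, node));
}

/* atomically detach an entire list ... e.g. re-assigning a logged-off
   user's outstanding work to a "SYSTEM" owner so the user reset can
   complete while the work drains */
struct pending *detach_all(struct pending *_Atomic *head)
{
    return atomic_exchange(head, NULL);
}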

As part of converting from cp67 to vm370 ... mentioned in this old email:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

some other work on automated benchmarking was also moved ... basically a synthetic workload that could be parameterized to match a whole lot of different characteristics. One of the settings was "extremely heavy" ... possibly ten times the load normally seen in real life ... which would always result in the vm370 system crashing.

So as part of eliminating all possible crashes (in the cp67 to vm370 migration) ... I included rewriting virtual machine serialization ... using the technique Charlie had used for cp67. Eventually all known cases of system crashes had been eliminated, and the final set of automated benchmarks that was part of releasing my Resource Manager (vm370 release 3 plc 9) involved 2000 benchmarks that took three months elapsed time to run. misc. past posts mentioning automated benchmarking
https://www.garlic.com/~lynn/submain.html#benchmark
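
A parameterized synthetic workload of that sort is easy to sketch. The knobs below (cpu burn, working-set size, think time) are hypothetical stand-ins for whatever the original scripts varied, not a reconstruction of them (nanosleep is POSIX):

#include <stdlib.h>
#include <string.h>
#include <time.h>

/* hypothetical workload knobs */
struct load { long cpu_iters; size_t touch_bytes; long think_ms; };

static void run(const struct load *l, int reps)
{
    char *buf = malloc(l->touch_bytes);
    for (int r = 0; r < reps; r++) {
        volatile long x = 0;
        for (long i = 0; i < l->cpu_iters; i++) x += i;   /* cpu burn */
        memset(buf, r, l->touch_bytes);                   /* working set */
        struct timespec t = { l->think_ms / 1000,
                              (l->think_ms % 1000) * 1000000L };
        nanosleep(&t, NULL);                              /* user think time */
    }
    free(buf);
}

int main(void)
{
    struct load interactive = { 100000, 64 * 1024, 200 };   /* light user */
    struct load heavy = { 10000000, 8 * 1024 * 1024, 0 };   /* "extremely heavy" */
    run(&interactive, 10);
    run(&heavy, 10);
    return 0;
}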

The 23Jun69 unbundling had started charging for application software, but the case was made that kernel software was still free. Then came the FS demise (FS was considered responsible for allowing clone processors to get a market foothold); misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

There was then a decision to start charging for kernel software ... and my Resource Manager was selected to be the guinea pig. misc. past posts mentioning the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

misc. past posts mentioning 23Jun69 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

For vm370 release 4, it was decided to release multiprocessor support, and a design was chosen based on some (unreleased) efforts I had been involved with, which depended on lots of stuff in the Resource Manager. The transition guidelines for kernel software charging allowed direct hardware support to still be free (aka multiprocessor support), and free software couldn't have a dependency on charged-for software (aka my Resource Manager). The eventual solution was to take approx. ninety percent of the lines of code in my Resource Manager and move them into the free release 4 base (leaving the release 4 Resource Manager the same price as the release 3 Resource Manager, even though it had only ten percent of the code).

So I believe that for an extended period of time (starting with the Resource Manager) all cases of zombie users and crashes because of dangling activity (after the user was logged off) had been eliminated ... until some APAR to DMKDSP that was VM13025. This APAR re-introduced zombie users by allowing some delay in resetting a virtual machine ... because of a poorly thought out analysis of some system failure.

Date: 02/17/86 09:49:56
From: wheeler
To: melinda

re: hunguser; I've just shot a couple of such conditions via the internal IBMVM forum with people appending DUMPRX output. You might check 13025 test in DMKDSP which appears to be contributing to most of the problems (I haven't tracked it back with change team why test was put in ... I presume it was some sort of unpredictable abend with logoff finishing while something was active for user).

One specific case causing hunguser was virtual machine in TIO-LOOP condition on virtual console ... some sort of error caused DMKGRF to clear busy condition then exiting w/o turning off TIO-LOOP flag and VMEXWAIT. It is likely that there may be a couple other similar situation around CP that weren't causing hangs prior to 13025 (note: CFP/CFQ clears VMEXWAIT, if user does system-reset#logoff ... 13025 won't catch him).

Completion of logoff with outstanding activity should be caught by CFP/CFQ and/or PGSPW (PTRPW). Situations where it won't work are with routines that do SWTCHVMs and have pending activity for another VMBLOK. Do you remember Jerry S. description of delayed spool interrupt for batch machines. I worked on a fix with him for the problem. That situation could also result in delayed activity for a logged-off VMBLOK, ... there is nothing obvious that CFP/CFQ/PGSPW can catch while the "sending" VMBLOK is waiting on the completion of the spooled console page I/O for the "receiving" VMBLOK. There are likely similar situations in VMCF/IUCV processing.


... snip ... top of post, old email index

Date: 02/17/86 11:00:21
From: wheeler
To: melinda

re: 13025; oh yes, there is also a 13025 test in uso ... where if VMEXWAIT is on, it simulates the FORCE command instead of LOGOFF ...


... snip ... top of post, old email index

The reference in the above to DUMPRX was a replacement for IPCS that I had done in REXX ... some past posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

vm/370 3081

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 06 Feb, 2011
Subject: vm/370 3081
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081

Date: 08/20/82 08:39:08
To: wheeler

re: 3081 capacity

As one might expect, the 3081 has problems running ACP. Eastern Airlines installed a 3081D, and found that it ran as much as 20% slower than the 3033 it replaced. Plus, one processor in the 3081 is idle all the time since ACP supports only one CPU. Needless to say, Eastern was pissed. IBM's recommendation to Eastern was to upgrade to a 3081K, which they did. The 3081K runs about 5% faster than the 3033. Of course, one CPU is still idle.

How IBM ever talked Eastern into buying the 3081 in the first place is beyond me.

Because of the problems with the 3081, IBM announced the 9083, which is a hand-picked 3081K with the cycle time cranked down a couple of nanoseconds. It also has different I/O microcode to balance the CPU/Channel load a bit better than on the 3081.

Although many people acted surprised that the 3081 had problems with ACP, it really was a surprise to no one. Management as high up as xxxxxx knew about this problem years ago. They deliberately chose to ignore the problem, because they felt there was no way to fix it. They were more or less right. The 9083 is a last ditch attempt to try and squeeze a bit more performance out of the machine.

Instead of ignoring the problem, they should have been working on ways to increase the performance of the channels and make ACP use the second CPU.


... snip ... top of post, old email index

Date: 10/08/82 11:33:26
From: wheeler

re: 3081/3033; Official pub. numbers I've seen say the 3033 is a five mip processor. The 3081d is two five mip processors and the 3081k is two seven mip processors.

More realistic numbers are that the 3033 is a 4.5 mip processor. Tests running a 3081d only as a UP (one processor) have it up to 20% slower than a one processor 3033 ... and a similar test with one 3081k processor has it 5% faster than a one processor 3033. MP numbers will have a 3081d/3081k performing at 1.5 times the UP number (in effective processor cycles). That would make the 3081k just a little more than 1.5 times a 3033up (in terms of CPU).

Other considerations will have a 3081k actually less than 1.5 times a 3033up. A 3033up has up to 16 megs. and 16 channels. A 3081k has up to 16 megs (greater than 16 meg. support is supposed to be available late next spring ... but that has a high exposure since the support still has lots of bugs in it). Effectively the 3081k only has the same memory as a 3033up (not twice ... and VM support may not work for another year). The 3081k does have 24 channels rather than 16 ... which gives it 1.5 times the number of channels. The catch there is that the 3033 has three channel directors (each one a 158 processor) to handle the 16 channels. There is only one IOCP on the 3081 & it isn't much (if any) faster than a single channel director. There are minor changes in the I/O architecture which offload some function from CPU cycles into IOCP cycles ... which may be more effective in the future with better hardware implementation.

Therefore the 3081k is about 1.5 times the cpu of a 3033up, with the same amount of memory, and definitely less than 1.5 times the I/O capacity.


... snip ... top of post, old email index
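
Working the email's numbers: one 3081k engine is about 1.05 times a 3033up, and the two-processor effective multiplier is about 1.5 ... so a 3081k comes in around 1.05 x 1.5 = 1.575 times a 3033up in CPU (i.e. "just a little more than 1.5 times"), while the same arithmetic puts a 3081d at 0.80 x 1.5 = 1.2 times.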

Date: 10/12/84 13:23:35
From: wheeler

re: sio/diag; just for the heck of it & not too scientific here are some nos. Simple "looping" cms program. SIO does a LPSW to enter wait state. No attempt was made to control other activity going on in the system. The number is the totcpu time taken from the 'q time' command. Program was run with a loop of 2000 and then again with a loop of 12000. The following numbers given are the difference between the time for the 2000 loop run and the 12000 loop run (i.e. time per 10,000 operations):

             diag       sio
4341         9.52      15.28
3081         3.45       7.18

both machines were running hpo so part of the 4341-2 ecps is "crippled" (or the 4341 numbers should have been better; the 3081 has the latest cpu speed-up EC on it also).

... snip ... top of post, old email index
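
The measurement trick in the email (run the same loop at 2000 and at 12000 iterations and take the difference) cancels any fixed startup cost, leaving the time for exactly 10,000 operations. A minimal sketch of the same differencing technique, with op() a stand-in for the actual SIO/DIAG test:

#include <stdio.h>
#include <time.h>

static volatile long sink;
static void op(void) { sink++; }   /* stand-in for the operation under test */

static double cpu_secs(long n)
{
    clock_t t0 = clock();
    for (long i = 0; i < n; i++)
        op();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    /* same differencing as the email: the 12000-iteration run minus the
       2000-iteration run is the cost of exactly 10,000 operations, with
       any fixed setup/overhead cost cancelled out */
    double per10k = cpu_secs(12000) - cpu_secs(2000);
    printf("time per 10,000 ops: %f sec\n", per10k);
    return 0;
}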

Date: 10/12/84 15:12:59
From: wheeler

re: sio vs. diag; well, diag is now only 40-50% of what sio is (instead of <20%) ... of course a real CMS running with sio would have more overhead than what was shown because of the requirement for going thru an I/O scheduler and an I/O interrupt processor (instead of the tight loop implemented in the test program).

The biggest hot spot has always been DISPATCH (DMKDSP); unfortunately the 43xx people don't appear to care since they've got microcode & the big machine people are tied up being concerned about the single large guest and doing everything possible for its benefit. In the meantime (past 10-14 years), the highly optimized code paths have fallen into disrepair ... they may not break ... but they don't look anywhere near as sleek as they used to.


... snip ... top of post, old email index

old post mentioning original work for endicott/4341 microcode (ECPS):
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

part of the above (aka increasing bloat in diag i/o) contributes to my paged-mapped cms filesystem pathlengths looking better. however, there is a fundamental philosophical architecture issue ... even at optimal, "diag i/o" is still using the (real address) channel program paradigm, which requires overhead to remap into a virtual paradigm. paged mapping already takes care of all that before it ever hits the cp interface. misc. past posts mentioning having originally done the work on cp67/cms and then ported it to vm370
https://www.garlic.com/~lynn/submain.html#mmap

old posts mentioning having done the precursor to diag i/o while undergraduate:
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/2000c.html#42 Domainatrix - the final word
https://www.garlic.com/~lynn/2003.html#60 MIDAS
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003k.html#7 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2008r.html#33 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2008s.html#56 Computer History Museum
https://www.garlic.com/~lynn/2010l.html#29 zPDT paper

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Mon, 07 Feb 2011 09:31:44 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
That's an interesting observation. Does anyone know why IBM decided to go with CKD disks rather than FBA for the 360? In retrospect it turns out to have been a terrible decision.

one of the things was that a lot of file system structure could be kept on disk (rather than in memory) ... with search channel programs used to find items on disk ... trading off relatively abundant I/O resources against the very scarce real storage resource.
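
The classic form of such a search was a SEARCH ID EQUAL command chained to a TIC that loops back on a miss ... the control unit does the scan, rather than the CPU searching an in-storage index. A rough sketch of the shape of that channel program as C data; the command codes are the standard ones, but the zeroed addresses and the field packing are illustrative:

#include <stdint.h>

/* S/370 channel command word layout (big-endian on the real machine) */
struct ccw {
    uint8_t  cmd;      /* command code */
    uint8_t  addr[3];  /* 24-bit data address */
    uint8_t  flags;    /* 0x40 = command chaining */
    uint8_t  unused;
    uint16_t count;
};

#define CMD_SEARCH_ID_EQ 0x31
#define CMD_TIC          0x08
#define CMD_READ_DATA    0x06
#define FLG_CC           0x40

/* the classic CKD lookup loop: the control unit compares each record's
   ID with the 5-byte CCHHR search argument; on a miss the TIC branches
   back to the SEARCH, on a hit the channel skips the TIC and falls
   through to the READ ... the scan runs in the I/O subsystem instead
   of burning CPU and real storage */
struct ccw program[3] = {
    { CMD_SEARCH_ID_EQ, {0,0,0}, FLG_CC, 0, 5 },  /* addr -> search argument */
    { CMD_TIC,          {0,0,0}, 0,      0, 0 },  /* addr -> back to program[0] */
    { CMD_READ_DATA,    {0,0,0}, 0,      0, 0 },  /* addr -> data buffer */
};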

by at least the mid-70s, the resource trade-off had inverted and FBA devices were being produced ... however the POK (batch) favorite son operating system didn't/wouldn't support them. I was told that even handing them fully tested and integrated FBA support, I would still have to come up with a $26M business case to cover the costs of documentation and education. I wasn't allowed to use life-cycle savings or other items ... it had to be based purely on incremental new disk sales (new gross disk sales on the order of $200m-$300m). The argument was that existing customers would just buy the same amount of FBA as they were buying CKD. misc. past posts mentioning FBA, CKD, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

There was a similar/analogous shift in real resources with the uptake of RDBMS in the 80s. In the 70s, there were some skirmishes between the ('60s) IMS physical database group in STL and the RDBMS System/R group in bldg. 28 ... some old posts mentioning original sql/relational
https://www.garlic.com/~lynn/submain.html#systemr

The IMS group claimed RDBMS doubled the physical disk requirements (for the implicit, on-disk index), with index processing also significantly increasing the number of disk I/Os. The relational retort was that exposing explicit record pointers to applications significantly increased administrative overhead and the human effort in manual maintenance. In the 80s, the significant drop in disk price/mbyte minimized the additional disk space argument, and further increases in available real storage allowed caching of the index structure ... mitigating the number of physical disk I/Os (while human experience/skills to manage IMS were becoming scarce/expensive).
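
The tradeoff is easy to see in miniature. A sketch with hypothetical structures (not the actual IMS or System/R formats): the physical-pointer record embeds the related record's disk address, while the relational style pays for a separate index (the claimed doubling of disk space, plus index I/Os) in exchange for having no exposed pointers to maintain:

/* physical-pointer style: the record carries an explicit disk address
   of the related record ... one direct I/O to follow it, but the
   pointer has to be maintained (by people) when records move */
struct part_rec {
    int  part_no;
    long child_rba;   /* explicit record address of first child */
};

/* relational style: no pointer in the record; a separate on-disk index
   maps key -> record address ... extra disk space and an extra index
   probe, but nothing for applications/DBAs to maintain by hand */
struct index_entry {
    int  key;
    long rba;
};

long index_lookup(const struct index_entry *ix, int n, int key)
{
    int lo = 0, hi = n - 1;   /* binary search standing in for index I/Os */
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (ix[mid].key == key) return ix[mid].rba;
        if (ix[mid].key < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}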

The early 80s also saw a big explosion in the mid-range business ... 43xx machines (and a similar explosion in vax/vms). The high-end disk was the CKD 3380 ... but the only mid-range disk was the FBA 3370. This made it difficult for the POK favorite son operating system to play in the mid-range with the 4341 ... since there wasn't a mid-range CKD product (there was some POK favorite son operating system use on 4341 at installations that upgraded from older 370s and retained existing legacy CKD). Eventually, to address the exploding mid-range opportunity (for the POK favorite son operating system), the CKD 3375 was produced ... which was CKD simulation on the FBA 3370.

Of course, today ... all CKD devices are simulated on FBA disks.

misc. recent posts mentioning the $26M business case
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2011.html#35 CKD DASD
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1

recent old email item mentioning 4341 & 3081
https://www.garlic.com/~lynn/2011b.html#email841012

in this (linkedin) discussion about 3081
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#62 vm/370 3081

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Mon, 07 Feb 2011 12:30:41 -0500
despen writes:
I think that depends on how much of the details of the architecture are hidden. If you don't know how screen locations are addressed, try this:

http://www.prycroft6.com.au/misc/3270.html

It just goes from bad to worse. 12 bit addressing wasn't enough so they came up with 14 bit addressing.

Graphics? A nightmare you don't want to see. Even hardware wise, IBM engineers must have been trying to cover up how slow graphics were by treating the viewer to a few seconds of scrambled screen for every graphics display.

Of course I use 3270s every day, but look inside. It's a nightmare.
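
For reference ... the 12-bit addressing complained about above packs a buffer position into two bytes of 6 bits each, with each byte run through a translate table so the result is always a displayable EBCDIC graphic; the 14-bit form sends the position as plain binary. A sketch in C, using the standard 64-entry code table from the 3270 references:

#include <stdint.h>

/* the standard 3270 6-bit buffer-address code table (every entry is a
   displayable EBCDIC graphic) */
static const uint8_t xlate[64] = {
    0x40,0xC1,0xC2,0xC3,0xC4,0xC5,0xC6,0xC7,0xC8,0xC9,0x4A,0x4B,0x4C,0x4D,0x4E,0x4F,
    0x50,0xD1,0xD2,0xD3,0xD4,0xD5,0xD6,0xD7,0xD8,0xD9,0x5A,0x5B,0x5C,0x5D,0x5E,0x5F,
    0x60,0x61,0xE2,0xE3,0xE4,0xE5,0xE6,0xE7,0xE8,0xE9,0x6A,0x6B,0x6C,0x6D,0x6E,0x6F,
    0xF0,0xF1,0xF2,0xF3,0xF4,0xF5,0xF6,0xF7,0xF8,0xF9,0x7A,0x7B,0x7C,0x7D,0x7E,0x7F
};

/* 12-bit form: 6 bits per byte, each through the table ... enough for
   4096 positions (24x80) but not the bigger screens */
void addr12(unsigned pos, uint8_t out[2])
{
    out[0] = xlate[(pos >> 6) & 0x3F];
    out[1] = xlate[pos & 0x3F];
}

/* 14-bit form: top two bits of the first byte are 00 and the rest is
   the position in plain binary (needed once screens passed 4096 cells) */
void addr14(unsigned pos, uint8_t out[2])
{
    out[0] = (pos >> 8) & 0x3F;
    out[1] = pos & 0xFF;
}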


3277/3272 (ANR) had a lot more electronics in the head ... and so the transmission was a lot more efficient.

3278/3274 (DFT) moved a lot of the electronics back into the controller ... which resulted in significantly reduced transmission efficiency.

This showed up in things like significant response time differences between ANR and DFT (if you worried about such things; when we tried to escalate ... the response was that the 3278/3274 was NOT designed for interactive computing ... but for data-entry ... i.e. essentially a computerized keypunch). An old comparison
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

It later showed up in upload/download throughput with PC terminal emulation ... comparing thruput with a 3277/ANR emulation card vis-a-vis a 3278/DFT emulation card.

All 327x terminals were designed with star-wired coax ... an individual coax cable from the datacenter out to every 327x terminal. Some buildings were running into loading limits because of the enormous aggregate weight of all those 327x cables. Token-ring was then somewhat positioned to address the enormous problem of the weight of 327x cables: run CAT4 out to a local MAU box in a departmental closet ... then star-wired CAT4 from the departmental MAU box to the individual terminals. The communication division was having token-ring cards (first 4mbit/sec and later 16mbit/sec) designed with throughput oriented towards the "terminal emulation" paradigm and hundreds of stations sharing common LAN bandwidth. misc. past posts mentioning "terminal emulation"
https://www.garlic.com/~lynn/subnetwork.html#emulation

above posts also periodically mention that the hard stance on preserving the "terminal emulation" paradigm (and install base) was starting to isolate the datacenter from the growing distributed computing activity. In the late 80s, a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference ... and opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. While the "terminal emulation" paradigm helped with early uptake of PCs (a business could get a PC with 3270 emulation for about the same price as a real 3270 ... and in the same desktop footprint get both a business terminal and some local computing; since the 3270 terminals were already business justified, it was a no-brainer business justification to switch from real terminal to PC), the rigid stance on preserving "terminal emulation" started to result in large amounts of data leaking out of the datacenter to more "distributed computing" friendly platforms (the leading edge of this wave was showing up as a decline in disk sales).

Because the RS6000 had microchannel ... it was being forced into using cards designed for the PS2 market. The workstation group had done their own 4mbit token-ring card for the PC/RT (16bit AT bus). It turns out that the 16mbit PS2 microchannel token-ring card (designed for the terminal emulation market) had lower per-card thruput than the PC/RT 4mbit token-ring card. RS6000 had similar issues with the PS2 scsi controller cards and the PS2 display adapter cards (none of them designed for the high-performance workstation market).

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Mon, 07 Feb 2011 12:49:12 -0500
Roland Hutchinson <my.spamtrap@verizon.net> writes:
i think research shows that all lower case is far easier to read than all upper case. a persistent legend has it that telegrams were printed in upper case rather than lower only because samuel morse could not countenance the thought of printing the word "god" (in reference to the deity of the monotheistic faiths) without an uppercase g, and his decision stuck us all in upper case for a century or so.

then along came UNIX(tm) and we started writing _almost_ everything in lower case.


little topic drift ... irate email (done in upper case for emphasis) about the enormous amount of SNA/VTAM misinformation being spewed ... in this case regarding applicability of SNA/VTAM for the NSFNET backbone:
https://www.garlic.com/~lynn/2006w.html#email870109
in this post
https://www.garlic.com/~lynn/2006w.html#26 SNA/VTAM for NSFNET

also included in this linkedin reference:
http://lnkd.in/JRVnGk

similar motivation to the group attempting to preserve the "terminal emulation" paradigm mentioned in this post:
https://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company
and these
https://www.garlic.com/~lynn/subnetwork.html#emulation

it also shows up as part of the effort to get the corporate internal network converted to SNA/VTAM ... part of the justification was telling the executive committee that PROFS was a VTAM application:
https://www.garlic.com/~lynn/2006x.html#email870302
in this post
https://www.garlic.com/~lynn/2006w.html#7 vmshare

and another old email
https://www.garlic.com/~lynn/2011.html#email870306
in this post
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing Plant 2 ... End of an Era

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Feb, 2011
Subject: Boeing Plant 2 ... End of an Era
Blog: Old Geek Registry
re:
http://lnkd.in/RaEX_s

The observation about "cloud computing" has been repeated a number of times regarding the various (virtual machine based) timesharing services that sprang up in the 60s (not just BCS but also NCSS & IDC and later TYMSHARE). In addition, there were loads of "in-house" operations ... including the one referenced here:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

One of the challenges in the 60s for (virtual machine based) commercial timesharing being up 7x24 was the CPU meter. Machines were leased and monthly charges were based on the CPU meter (in blocks of 1st shift, 1+2 shift, 1+2+3 shift & all four shifts, aka 7x24), which ran whenever the CPU was executing and/or there was active I/O. One of the tricks was to come up with an I/O sequence that would accept incoming characters ... but allow the CPU meter to stop when things were otherwise idle. Another challenge was supporting darkroom operation ... being able to leave the system up offshift w/o requiring an onsite operator.

In the early days there was sparse offshift use ... but it was chicken&egg: little offshift use made it hard to justify being up 7x24 ... but w/o 7x24 availability, it was difficult to encourage offshift use (so it was important early on to minimize the offshift operating costs).

past posts mentioning early commercial time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Mon, 07 Feb 2011 15:28:50 -0500
despen writes:
That's good and it does earn some credibility.

I think I did better. After developing a 2260 application my boss handed me the specifications for a 3270 and asked me what I thought of it. I gave him the correct answer, it's a piece of crap.

Subsequent to that I've done a lot of pretty low level 3270 stuff but never changed my opinion. It has no redeeming qualities.


re:
https://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company

A class of (3270) "pseudo devices" was added to vm370 ... and along with PVM (passthrough virtual machine) ... allowed remote terminal emulation ("dial PVM" and then select another network node to "logon" to).

Fairly early ... the person responsible for the internal email client (VMSG, which was also used as the embedded technology for PROFS email) wrote PARASITE & STORY ... a small, compact programmable interface to PVM along with a HLLAPI-like language (predating the IBM/PC and 3270 terminal emulation).

old posts with some old PARASITE/STORY information ... along with STORY semantics and example scripts:
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#36 Newbie TOPS-10 7.03 question

including things like logging into the Field Engineering "RETAIN" system and pulling off lists of service support information.

both PARASITE & STORY are remarkable for the function they provided ... considering the size of their implementation.

--
virtualization experience starting Jan1968, online at home since Mar1970

vm/370 3081

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Feb, 2011
Subject: vm/370 3081
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#62 vm/370 3081

I have no knowledge of VM having such problems (modulo any special case code introduced in vm/sp1 for the ACP/TPF case) ... this is a long-winded post to the (linkedin) z/VM group that includes some discussion of the original release 4 multiprocessor support. The precursor had been a 5-way implementation ... and then a 16-way implementation (neither announced). The 16-way had been going great guns ... including co-opting some of the 3033 processor engineers to spend spare time on the project. Then somebody informed the head of POK that it might take decades before the POK favorite son operating system had 16-way support ... at which point the head of POK invited some people to never visit POK again ... and directed that the 3033 processor engineers focus solely on what they were doing.
https://www.garlic.com/~lynn/2011b.html#61
misc. past posts specifically mentioning 5-way effort
https://www.garlic.com/~lynn/submain.html#bounce
general posts mentioning multiprocessor (&/or compare&swap instruction)
https://www.garlic.com/~lynn/subtopic.html#smp

The 370 strong memory consistency takes a significant toll on cache processing. Both standard MVS & VM had an issue with kernel dynamic storage where different allocated storage areas could share a common cache line. If two different storage areas were in use by different processors concurrently ... there could be significant cache-line thrashing between the processors. This issue existed on the 2-way 3081 ... but the probability of it happening increased for the 4-way 3084. To address the problem, both MVS and VM had their kernel dynamic storage modified to be cache-line sensitive (storage allocation started on a cache-line boundary and was allocated in multiples of the cache-line size). This change (to kernel dynamic storage) got something like a 5-6% performance improvement.
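
The fix described (start each allocation on a cache-line boundary and round sizes up to whole lines) is the same false-sharing avoidance still used today. A minimal sketch in C11, with the 128-byte line size an assumption rather than the 3081's actual figure:

#include <stdlib.h>

#define LINE 128   /* assumed cache-line size; substitute the machine's actual value */

/* round every request up to whole cache lines and start it on a line
   boundary, so two concurrently-used allocations can never share a
   line (and so never thrash that line between processors) */
void *line_alloc(size_t n)
{
    size_t rounded = (n + LINE - 1) & ~(size_t)(LINE - 1);
    return aligned_alloc(LINE, rounded);   /* C11; size is a multiple of alignment */
}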

Past posts mentioning MVS smp getting extended from 2-way to 4-way (for the 3084) as well as cache-line sensitivity
https://www.garlic.com/~lynn/2001j.html#18
https://www.garlic.com/~lynn/2003j.html#42
https://www.garlic.com/~lynn/2007.html#44
https://www.garlic.com/~lynn/2008e.html#40

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing Plant 2 ... End of an Era

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 07 Feb, 2011
Subject: Boeing Plant 2 ... End of an Era
Blog: Old Geek Registry
re:
http://lnkd.in/RaEX_s

One of the biggest users of cmsapl (from the cambridge science center) ... then aplcms (from the palo alto science center) ... and then various other generations of APL ... was the (virtual machine) online HONE system. The US HONE datacenters were consolidated in the mid-70s in a bldg across the back parking lot from PASC (and by the late 70s it was one of the largest single system operations in the world). HONE systems provided online sales & marketing support ... worldwide, as HONE systems were cloned around the world
https://www.garlic.com/~lynn/lhwemail.html#hone
past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

random trivia ... the HONE consolidated datacenter bldg (across the back parking lot from PASC) has a different occupant now ... but you may know the occupant of the new bldg that went up next door to it (1601 cal, palo alto, ca.). Some of my earliest overseas business trips were being asked to help with the new HONE clones that went in at various locations around the world (including when EMEA hdqtrs moved from the US to Paris).

I first met Pete (and a couple of others) in the early 70s when they came out to Cambridge to add 3330 & 2305 device support to the "CP67I" system. The production cambridge system (360/67 CP67L) had "H" updates to provide virtual 370 virtual machines. Then (on top of CP67H) was a set of "I" updates which modified CP67 to run on the 370 architecture (instead of 360/67). The cp67i system was operational in 370 virtual machines a year before there was any real 370 hardware supporting virtual memory (in fact, booting the cp67i system was one of the early tests for real 370 virtual memory hardware). Internally, for a long time, a large number of 370 machines ran cp67i (sometimes called "cp67sj" with the added device support for 3330 & 2305).

--
virtualization experience starting Jan1968, online at home since Mar1970

vm/370 3081

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Feb, 2011
Subject: vm/370 3081
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011b.html#49 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#62 vm/370 3081
https://www.garlic.com/~lynn/2011b.html#68 vm/370 3081

more on the 3084 ... just straight storage alterations result in cross-cache chatter ... even without the scenario where two different storage uses overlap in the same cache line (resulting in cache-line thrashing between the processors). There is also the issue that the 308x channel processor is even slower than the 303x channel director (aka a 158 microcode engine with just the 158 integrated channel microcode and no 370 microcode).

Date: 09/17/82 10:40:29
From: wheeler

<... snip ...>

Performance numbers for the 3084 seem to have some liberties. A 4-way should have three times the performance interference of a 2-way (cache invalidation signals from 3 other processors instead of one). They cheat with the 3083 versus 3081. For example, on a 158ap, running a UP generated system ... the processor runs 10% slower if the switch on the machine is in AP-mode rather than UP-mode (additional delay in each machine cycle just to listen for cache invalidation signals from the other processor ... this is w/o the other processor even executing anything generating storage alterations & cache invalidation signals). I've heard that the 3084 numbers are somewhat selected benchmarks that do minimal storage alterations ... extensive storage alteration programs can have disastrous effects on 3084 performance. ... I've been told that almost every control unit that has attached to a 308x has had to undergo hardware ECs ... apparently it was easier for every control unit hardware group in the company (even on machines no longer with development group people available) to resolve the problems than for the 308x channels. Also did you see the message that ACP runs 20% slower on a 3081d than on a 3033. On a 3081k, ACP runs 5% faster than a 3033. POK has started a special 3081k CPU program where the 3081s coming down the line will be tested to see if they can run with their clock cranked down. If they pass, they will be special high performance 3081Ks which run slightly faster than normal 3081ks.


... snip ... top of post, old email index

aka "overclocking" by any other name.

the following was an extremely long discussion and has been heavily snipped. one of the issues was the 3081 "pageable" microcode having an impact on system throughput (mentioned in the SIE instruction discussion)
https://www.garlic.com/~lynn/2011.html#62 SIE - CompArch

Date: 2/10/81 14:05
To: wheeler
...
We have a new problem in cache management. The 3081 has cache lines containing pageable microcode: about 1000 words (108 bits) of non-pageable and about 1000 words of pageable. Unfortunately, it appears that these 1000 words of pageable microcode are not enough to hold both the VM and the MVS microcode for the new goodies present in 811 architecture. I'm hoping our analysis was wrong and that is not what we are looking at, but it indeed appears to be. It may be a few years yet before we see an 811 suitable for use by VM.

Even if we have enough microcode cache, there is a small cache size for other work in comparison to the storage size. It was designed with 16M in mind and has been 'upgraded' to 64M without increasing the cache size (brilliant move). There is thought of requesting architecture to define an instruction to cast out all altered cache lines from the store-through cache. This would place cache management under more direct control from the scheduler.

...

While the pageable microcode cache seems to be inadequate for both VM/370 and VM/811 (VM/370 was hoping to catch VM/PE, but VMA+MVSA is too tight a fit), it is much worse for VM/811 than for VM/370. Management is still talking about 12/82 delivery dates (which they in fact made impossible when they decided to re-convert all control blocks, module names and linkage conventions last summer). I live in a mad-house.

...


... snip ... top of post, old email index

... oh, with regard to "811" ... this was the code name for 370/xa, from architecture documents carrying a date of nov1978.

for all I know, there is a possibility that the "vmtool" had a 2-way dependency. in the wake of the demise of Future System ... there was a mad rush to get products back into the 370 product pipeline (370 work had pretty much been killed off during the FS period).
https://www.garlic.com/~lynn/submain.html#futuresys

Part of this was the 3033 (the 168 wiring diagram remapped to faster chips) ... in parallel with the 370/xa effort. POK had managed to convince the corporation that vm370 should be killed, the (burlington mall) 370 development group shut down, and all the people moved to POK to support MVS/XA development (otherwise MVS/XA wouldn't be able to meet its ship schedule). Part of that was an internal-only virtual machine tool (never intended to be shipped to customers) supporting MVS/XA development.

Endicott managed to save the vm370 product mission, but had to reconstitute a development group from scratch.

The vmtool work went on independent of the later vm370 work ... including the adding of multiprocessor support. As a result, the vmtool may have chosen a completely different SMP implementation. Later, when the decision was made to do a limited ship of the vmtool (as vm/xa) ... it could have required rework from a 2-way specific design to a 4-way design (in order to support the 3084).

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Tue, 08 Feb 2011 13:35:39 -0500
Michael Wojcik <mwojcik@newsguy.com> writes:
I had an IBM RT/PC with the "Megapixel" display, which used one of those cables that was just a bundle of three individual cables with BNC connectors, for RGB. Sync was carried on one (G?), but you could swap the other two without ill effect, except that you'd be swapping the two color signals.

I had an RT/PC with megapixel display in a non-IBM booth at Interop '88 ... it was at right angles to the SUN booth, which had/included Case doing SNMP; Case was talked into coming over and getting SNMP up and running on the RT. misc. past posts mentioning Interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

At the time of Interop '88, for lots of people it wasn't clear that SNMP was going to be the winner. Also at Interop '88, vendors had extensive GOSIP/OSI stuff in their booths (the fed. gov. was mandating that the internet be killed off and replaced by OSI stuff).

misc. past posts in this thread:
https://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#63 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 08 Feb, 2011
Subject: IBM Future System
Blog: IBM Historic Computing
re:
https://www.garlic.com/~lynn/2011.html#14 IBM Future System
https://www.garlic.com/~lynn/2011.html#18 IBM Future System
https://www.garlic.com/~lynn/2011.html#20 IBM Future System

from long ago and far away .... cp67 "G" updates ... base virtual 370 was the cp67H updates, cp67 running on 370 (rather than 360/67) was the cp67I updates, 3330&2305 device support was the cp67sj updates. System/R was the original sql/relational implementation

Date: 01/17/80 08:12:39
From: wheeler

There are/were three different projects.

A long time ago and far away, some people from YKT come up to CSC and we worked with them on the 'G' updates for CP/67 (running on a 67) which was to provide virtual 4-way 370 support. Included as part of that was an additional 'CPU' (5-way) for debug purposes which was a super set of the other four and could selectively request from CP that events from the 4-way complex be sent to the super CPU for handling (basically it was going to be a debug facility). All of this was going to be for supporting the design & debug of a new operating system. When FS started to get into trouble the YKT project was canceled (something like 50 people at its peak) and the group was moved over to work on FS in hopes that they might bail it out.

2) One or two people from that project escaped and got to SJR. Here, primarily Vera Watson, wrote R/W shared segment support for VM (turns out it was a simple subset of the 'G' updates) in support of System R (relational data base language). Those updates are up and running and somewhat distributed (a couple customers via joint studies and lots of internal sites).

3) Independent of that work, another group at SJR has a 145 running a modified VM with extremely enhanced modified timer event support. The objective is to allow somebody to specify what speed CPU and how fast the DASD are for a virtual machine operator test. VM calculates how fast the real CPU & DASD are and 'adjusts' the relative occurrence of all events so that I/O & timer interrupts occur after the appropriate number of instructions have been executed virtually. These changes do not provide multiple CPU support.


... snip ... top of post, old email index

past mention of cp67g
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2007f.html#10 Beyond multicore

recent mention of virtual 370 on cp67
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era

--
virtualization experience starting Jan1968, online at home since Mar1970

Custom programmability for 3270 emulators

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Custom programmability for 3270 emulators
Newsgroups: bit.listserv.ibm-main
Date: 8 Feb 2011 15:22:51 -0800
charlesm@MCN.ORG (Charles Mills) writes:
Last one that I wrote was in about 1990. Anyone remember SNA? <g>

There are several 3270 vendors around. Some of the emulators have a macro capability.


the internal parasite/story applications predated the ibm/pc and relied on a vm370 pseudo device and the passthru virtual machine (to do remote 3270 emulation over the internal network). old posts with description and some example stories ... including automated login to the FE Retain system to retrieve PUT buckets:
https://www.garlic.com/~lynn/2001k.html#35
https://www.garlic.com/~lynn/2001k.html#36

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Wed, 09 Feb 2011 08:43:20 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Exactly! When CGI first came out I read the description and thought "this is nothing new, CICS programmers have been doing this since the nineteen-sixties."

univ. library got an ONR grant to do an online catalog, part of the money went to a 2321 datacell, and the effort was also selected to be one of the beta test sites for the CICS product ... and I got tasked to support/debug the CICS effort.

somewhat similar recent post in (linkedin) MainframeZone group
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???

recent post in ibm 2321 (data cell) thread in (linkedin) IBM Historic Computing group
https://www.garlic.com/~lynn/2010q.html#67

yelavich cics history pages from way back machine
https://web.archive.org/web/20050409124902/www.yelavich.com/cicshist.htm

other yelavich cics history
https://web.archive.org/web/20040705000349/http://www.yelavich.com/history/ev198001.htm
and
https://web.archive.org/web/20041023110006/http://www.yelavich.com/history/ev200402.htm

past posts mentioning CICS (&/or BDAM)
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Thu, 10 Feb 2011 08:36:06 -0500
hancock4 writes:
1401 emulation also required microcode in the S/360 to work. Originally they tried doing it merely via software, but that ran too slow.

I believe emulation was extremely popular so it was virtually a standard feature. It was critical for IBM to provide so as to provide an upward path for 1401 and 7090 users to go to S/360.

Many shops retained old programs into the 1990s. Who knows, maybe some is still running today though I suspect Y2k needs killed off whatever was left.


univ. had a 407 plug-board application which went thru some stages and eventually was running as a 360 cobol program ... still simulating the 407 plug-board ... including printing out the 407 settings at the end of the run. this was run every day for the administration (on os/360 on a 360/67 running in 360/65 mode). one day one of the operators noted that the ending 407 settings were different and they stopped all processing. They spent the next couple hrs trying to find somebody in the administration that knew what should be done (while all other processing was suspended). Finally the decision was made to rerun the application (and see if the same results came out) ... and then resume normal processing.

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Thu, 10 Feb 2011 09:25:20 -0500
hancock4 writes:
I don't understand why it was a terrible decision. Isn't CKD more efficient since the record stored on the disk matched the needs of the application?

From the perspective of a COBOL programmer, the particular model of disk was irrelevant. We had ISAM and later VSAM for random access files.


re:
https://www.garlic.com/~lynn/2011b.html#63 If IBM Hadn't Bet the Company

specialized formatting with keys and data ... and then search operations that would scan for specific keys &/or data ... minimizing filesystem structure in real memory. multi-track search could scan every track at the same arm location (cylinder) ... but to continue past that it would have to be restarted by software. a major os/360 disk structure was the VTOC (volume/disk table of contents) ... which used multi-track search. The other was library files, PDS (partitioned data set), used for most executable software (as well as other items) ... which had a PDS "directory" ... that was also searched with multi-track search.

Because real storage was scarce, the disk i/o search argument was kept in real processor storage ... and was refetched by the search command for every key/data encountered for the compare operation (requiring end-to-end access from the disk back to processor storage for every compare).

The all-time winner in complexity was ISAM channel programs, which could do a search, read the data ... reposition the arm ... and use the previously read data as a later search argument (potentially repeated several times, all in a single channel program w/o interrupting the processor).
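
a rough sketch of the shape of the multi-track search channel programs described above (pseudo-CCWs as a python list, purely for illustration ... the opcodes are the classic PDS directory search sequence, the operand names are invented). the TIC looping back to the SEARCH is what keeps the device, controller and channel busy for whole revolutions at a time:

# pseudo-CCW sketch of a PDS directory multi-track search (illustration only)
channel_program = [
    ("SEEK",             "BBCCHH"),  # position the arm at the directory cylinder
    ("SEARCH KEY EQUAL", "member"),  # search argument is refetched from processor
                                     #   storage for every key encountered
    ("TIC",              "*-8"),     # no match: branch back to the SEARCH ...
                                     #   multi-track mode steps through every
                                     #   record on every track of the cylinder
    ("READ DATA",        "entry"),   # match: read the directory entry giving the
                                     #   member's location (TTR) in the PDS
]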

As I mentioned, the trade-off started to shift by at least the mid-70s ... with real storage resources significantly increasing ... while I/O & disk thruput improvements were starting to lag significantly (disk becoming the major bottleneck; the rest of the infrastructure was increasingly leveraging real storage to compensate for the growing disk bottleneck).

In the late 70s, I was brought into the datacenter of a large national retailer that had a large number of loosely-coupled processors running the latest os/360 in a "shared disk" environment. They were having a really horrible throughput problem, and several experts from the corporation had already been brought in to examine the problem.

I was brought into a classroom that had several tables covered with foot-high stacks of printed output ... all performance activity reports for all the various systems (including each system's individual disk i/o counts at several-minute intervals). After 30-40 minutes of fanning through the reports ... I asked about one specific disk ... under peak load, the aggregate sum of its disk i/o counts across all systems seemed to peg at six or seven per second (with very low peak-versus-non-peak correlation for any other item) ... just a rough measure, since I was doing the aggregation across all systems and the correlation in my head.

Turns out the disk contained the shared application program (PDS) library for all the regions. It had a large number of applications, and the size of the PDS directory was three (3330) cylinders. Every program load (across all systems) first required a multi-track search of the PDS directory ... on avg a 1.5 cylinder search. With the 3330 spinning at 3,600 RPM ... the first multi-track search i/o took a full 19 revolutions (or 19/60ths of a second), with a 2nd multi-track search of 9.5 revolutions (9.5/60ths of a second) ... before getting the actual location of the PDS member application program ... arm movement and load maybe another 2/60ths of a second. In aggregate, each program load was taking 3 disk I/Os and approx. 30/60ths (half) of a second .... so the whole infrastructure across all processors for all the national regions for all retail stores ... was limited to loading about two applications per second.
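
a quick back-of-the-envelope rendering of that arithmetic (just the numbers from the paragraph above):

# back-of-the-envelope for the PDS directory bottleneck described above
revs_per_sec = 3600 / 60             # 3330 at 3,600 RPM = 60 revolutions/second

first_search  = 19.0 / revs_per_sec  # full-cylinder search: 19 tracks/revolutions
second_search = 9.5 / revs_per_sec   # avg continuation: half a cylinder
move_and_load = 2.0 / revs_per_sec   # arm movement plus member load

per_load = first_search + second_search + move_and_load
print(per_load)          # ~0.5 sec (and 3 disk I/Os) per program load
print(3 / per_load)      # ~6 disk I/Os/sec ... the observed "six or seven"
print(1 / per_load)      # ~2 program loads/sec across ALL systems combined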

Now, normally the disk infrastructure has multiple disks sharing a common controller and a common channel. Because of the need to repeatedly refetch the search argument from processor memory ... a multi-track search locks up the controller and channel for the duration of the operation (they aren't available for any other operations) ... severely degrading everything else on that path.

So the solution ... was to replicate the program library ... one per processor (instead of a shared/common one for all processors) ... requiring lots of manual effort to keep them all in sync. The replicated program library was also split ... with more application logic to figure out which of the libraries contained a specific application (with a dedicated set of program libraries per system).

A similar, but different, story about pervasive use of multi-track search by os/360 (and its descendants): san jose research for a period had a 370/168 running MVS (replacing a 370/195 that had run MVT) in a shared disk configuration with a 370/158 running VM. Even tho VM used CKD disks, its I/O paradigm had always been FBA (so it was trivial to support real FBA disks). SJR had a rule that while the disks & controllers were physically shared, they were logically partitioned so I/O activity from the two different systems wouldn't interfere.

One day an operator accidentally mounted an MVS 3330 pack on a VM "string" disk controller. Within 10 minutes, operations were getting irate calls from VM users about severely degraded performance. Normal multi-track searches by MVS to its pack ... were causing extended lock-out of VM access to other disks on the same string/controller. MVS operations refused to interrupt the application and move the 3330 to an MVS string/controller.

A few individuals then took a pack for a VS1 system that had been highly optimized for operation under VM ... got the pack mounted on an MVS string/controller, brought up VS1 under VM (on the 370/158) ... and started running their own multi-track searches. This managed to nearly bring the MVS system to a halt (drastically reducing MVS activity involving the 3330 on the VM string ... and significantly improving response for the VM users that were accessing data on that string). MVS operations decided that they would immediately move the MVS 3330 (instead of waiting for off-shift) ... if the VM people would halt the VS1 activity.

One of the jokes is that a large factor contributing to horrible TSO response under MVS (TSO users don't normally even realize how horrible it is ... unless they have seen VM response for comparison) ... isn't just the scheduler and other MVS operational characteristics ... but also the enormous delays imposed by the multi-track search paradigm.

misc. past posts discussing CKD, multi-track search and FBA
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Thu, 10 Feb 2011 09:55:53 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
I'm not sure of that. From IBM's POV that's more resources they have to devote to software that probably has a small profit margin.

From the user's POV, obviously a low-end customer couldn't afford to run an OS that eats 75% or more of his system. I believe these systems could have run PCP, but that was single-tasking and much less capable than DOS.

On the other hand I've long been disappointed (since the 70's) that IBM didn't make more of a move to unify the two systems. I understand from the stuff Lynn posts that probably no one could make a business case for it, but for users more compatibility would have been a good thing. It defeats the benefits of a unified architecture when you have to at least recompile all your programs and recode all your JCL when moving from DOS to OS.


The POK favorite son operating system repeatedly tried ... customers kept getting in the way.

the software has a small profit margin (in the 60s, it used to be free) ... however, the user being able to run application programs was the justification for having the hardware. Amdahl gave a talk in the early 70s at MIT about founding his clone processor company. He was asked how he convinced the venture/money people to fund the company. He said that customers had already invested several hundred billion in 360 application software development ... and even if IBM were to totally walk away from 360 (which I claim could be a veiled reference to Future System), that was enough to keep him in business through the end of the century.

DOS to OS conversion was an incompatibility inhibitor ... but not as great as a totally different system. Part of the issue was perception ... part was the degree of effort to move from DOS to OS (compared to radically different systems and hardware) ... aka it possibly reduced the benefit of unified architecture ... but didn't totally defeat it.

Low-end (actually any) customers could run an OS that eats 75% or more of their system ... if the aggregate cost of the system is less than the cost of not running the application (net benefit) ... and they didn't perceive a viable alternative. In the early days it was a completely different world ... the cost of application development could dominate all other costs ... and opportunity costs ... having the automation versus doing it manually ... covered a lot of sins ... especially with high value operations like financial ones (financial institutions and/or the financial operations of large corporations).

Compatible architecture provided the perception that the several hundred billion in (customer) software application development didn't have to be scrapped, starting all over every generation. This was the testimony in gov. litigation by RCA(?) that all the vendors had realized by the late 50s the requirement for a single compatible line ... and only IBM actually pulled it off (the corollary is that with the perception of being the only one that pulled it off, a lot of other things could be done wrong ... and it could still dominate the market).
https://www.garlic.com/~lynn/2011b.html#57 If IBM Hadn't Bet the Company

POK favorite son operating system ... after the FS failure and the mad rush to get product (hardware & software) back into the 370 line ... managed to convince corporate to kill off vm370 (because POK needed all the vm370 developers in order to meet the MVS/XA delivery date). Endicott eventually managed to save the vm370 product ... but had to reconstitute a development group from scratch. misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

some of the 70s confusion was that FS was planned to completely replace all 370 products (hardware and operating system) ... the aftermath of the FS demise left the company in disarray and scrambling to recover (there have been comments that if something of the magnitude of FS had been attempted by any other vendor ... the resulting massive failure would have put that vendor out of business)

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Thu, 10 Feb 2011 14:56:41 -0500
re:
https://www.garlic.com/~lynn/2011b.html#77

the company did have a large variety of other processors (besides 360/370) ... 1800, s/3, s/32, s/34, s/36, s/38, series/1, system/7, 8100, etc. in addition there was a large variety of embedded processors in controllers, devices, and the native engines for the low-end 370s. old email (joke) about the MIT lisp machine project asking for an 801/risc processor and being offered an 8100 instead:
https://www.garlic.com/~lynn/2003e.html#email790711

370 compatibility was already giving up a lot in performance; low&mid range emulators typically ran a ratio of 10:1 native instructions to 370: the 300kip 370/145 needed a 3mip native engine, the 80kip 370/115 needed nearly a 1mip native engine.

the 135/145 follow-on (370 138/148) had additional microcode storage ... the plan was to 1) add additional (virtual machine) 370 privileged instructions to be executed directly according to virtual machine rules (rather than interrupting into the kernel to be simulated ... aka "VMA", originally done for the 370/158) and 2) "ECPS", which placed part of the cp kernel directly in microcode. There was 6000 bytes of extra microcode space for "ECPS", and 370->native translated nearly byte-for-byte. So the first effort was to instrument the vm370 kernel and identify the highest-used kernel instruction pathlengths ... cutting off when 6000 bytes was reached (which accounted for just under 80% of kernel overhead). The process involved inventing a new instruction and adding it to the vm370 kernel in front of the instruction sequence it was replacing (with a pointer to the next following instruction where execution would resume). At startup, there was a test whether the appropriate microcode was available ... and if not, all the "ECPS" instructions were overlaid with "no-ops". Old post with the result of the vm370 kernel instrumentation that selected the kernel paths that would become ECPS (part of work I did for Endicott spring of 1975):
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
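
in effect the selection was a simple greedy cut: take the instrumented kernel paths in order of how much of the kernel overhead they account for, until the 6000-byte budget is exhausted (since 370->native translated nearly byte-for-byte, 370 code bytes approximate the microcode bytes required). a minimal sketch of that cut ... the vm370 module names are real, but the byte counts and percentages here are invented for illustration (the real measurements are in the linked post):

# greedy cut of highest-use kernel paths into a fixed microcode budget
# (module names real; byte counts and percentages invented for illustration)
paths = [  # (kernel path, bytes of 370 code, % of kernel overhead)
    ("DMKDSP fast redispatch",    1200, 22.0),
    ("DMKPRV privop simulation",   900, 18.5),
    ("DMKVIO virtual I/O",        1500, 17.0),
    ("DMKPTR page handling",      1100, 12.5),
    ("DMKSCH scheduler",          1400,  8.0),
    ("DMKFRE storage allocation",  800,  5.5),
]

BUDGET = 6000  # bytes of spare 138/148 microcode store
used, covered, chosen = 0, 0.0, []
for name, size, pct in sorted(paths, key=lambda p: p[2], reverse=True):
    if used + size <= BUDGET:  # cut off when the budget is reached
        chosen.append(name)
        used += size
        covered += pct

print(used, covered)  # e.g. 5500 bytes covering 75.5% of kernel overhead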

That 80% of the kernel pathlength (ECPS) then ran ten times faster (implemented on the native engine) ... the 370/148 ran 370 at over 500kips ... ECPS code then ran at over 5mips (directly on the native engine). Net for kernel overhead as a whole: roughly 1/(0.2 + 0.8/10), or about 3.5 times faster.

Now, somewhat as a result of the complexity of Future System ... I've claimed that the work on 801/risc went to the opposite extreme of extremely simple hardware. There was an advanced technology conference held in POK in the mid-70s ... where we presented a 16-way 370 multiprocessor and the 801 group presented risc, CP.r and PL.8. At the end of the 70s, an 801/risc effort was kicked off to replace the large variety of internal microprocessors with 801/risc (eliminating huge duplication in chip designs along with native engine software & application development). Low&mid range 370s would all converge on 801/risc as the native microprocessor (the 4341 follow-on, the 4381, was supposed to be 801 native; the s/38 follow-on, the as/400, was supposed to be 801 native; the OPD displaywriter follow-on was to be 801 native; large numbers of microprocessors for controllers and other embedded devices would all be 801). For various reasons, these efforts all floundered and were aborted.

In spring 1982, I held an internal corporate advanced technology conference (the first since the earlier one held in POK; comments were that in the wake of the Future System demise ... most advanced technology efforts had eventually been scavenged to resume 370 activity and push 370 hardware and software out the door as quickly as possible).
https://www.garlic.com/~lynn/96.html#4a

I wanted to do a project that reworked vm/cms with lots of new function ... implemented in a higher level language ... to minimize the effort of porting it to different machine architectures. In theory, this would also allow retargeting the 370 kernel to some native engine (possibly 801) while still providing both native virtual machines and, at the same time, 370 virtual machines (the best of ECPS combined with being able to port to other processors) ... a slight analogy might be Apple with CMU MACH and power/pc. misc. old email mentioning 801, risc, iliad, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/lhwemail.html#801

note there had been a big scavenging of advanced technology towards the end of FS (attempting to save the effort) ... referenced in this old email
https://www.garlic.com/~lynn/2011b.html#email800117
in this (linkedin) Future System post
https://www.garlic.com/~lynn/2011b.html#72

but then (after the demise of FS) came the mad rush to try and get stuff back into the 370 product pipelines ... which sucked nearly all remaining advanced technology resources into near-term product production.

Other interests got involved (in the SCP redo) and eventually the effort was redirected to a stripped-down kernel that could be common across all the 370 operating systems. The prototype/pilot was to take the stripped-down tss/370 kernel ... which had been done as a special effort for AT&T ... they wanted to scaffold a unix user interface on top of the stripped-down tss/370 kernel. The justification was that with four 370 operating systems ... there was enormous duplication of effort required of all the device product houses ... four times the cost for device drivers and RAS support in the four different operating systems (DOS, VS1, VM, and MVS). This became a strategic corporate activity (common low-level code for all four operating systems) ... huge numbers of people were assigned to it ... and eventually it had its own FS-demise moment ... collapsing under its own weight. recent post on the subject
https://www.garlic.com/~lynn/2011.html#20
older reference about some of the TSS/370 analysis (for above)
https://www.garlic.com/~lynn/2001m.html#53

with all that going on ... by the mid-80s ... there were starting to be single-chip 370 implementations ... as well as various other/faster 801/risc chips ... so, going in a slightly different direction, there were large clusters of processors (in rack-mount configurations):
https://www.garlic.com/~lynn/2004m.html#17
also mentioned in this recent post
https://www.garlic.com/~lynn/2011b.html#48

and this collection of old email about the NSFNET backbone ... I had to find a fill-in for a presentation to the head of NSF on HSDT ... because I needed to do a week in YKT on the cluster processor stuff
http://lnkd.in/JRVnGk

other old NSFNET backbone related email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

and misc. old HSDT related email
https://www.garlic.com/~lynn/lhwemail.html#hsdt

the above cluster processor effort somewhat stalled ... but resumed with medusa (cluster in a rack) ... some old email
https://www.garlic.com/~lynn/lhwemail.html#medusa

referenced in this old post about the jan92 meeting in Ellison's conference room on cluster scale-up
https://www.garlic.com/~lynn/95.html#13

as part of ha/cmp product some old posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

for other drift ... within a month of the meeting in Ellison's conference room, the cluster scale-up effort had been transferred, announced as a supercomputer, and we were told we couldn't work on anything with more than four processors. This contributed to the decision to leave not long afterwards.

now, a year or so later, two of the other people that were in that jan92 meeting ... had also left (oracle) and showed up at a small client/server startup responsible for something called the "commerce server" ... the startup had also invented this technology they called "SSL". We were brought in to consult because they wanted to do payment transactions on the server; the result is now frequently called "electronic commerce".

--
virtualization experience starting Jan1968, online at home since Mar1970

NASA proves once again that, for it, the impossible is not even difficult

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NASA proves once again that, for it, the impossible is not even difficult.
Newsgroups: comp.arch, comp.arch.embedded
Date: Thu, 10 Feb 2011 15:10:31 -0500
Robert Myers <rbmyersusa@gmail.com> writes:
Similarly, I still don't understand how anyone would fail to recognize that the John Hancock building in Boston is practically a laboratory model for an airfoil with a design that would induce leading edge separation and big unsteady lifting forces as a result. Thus, I don't understand why the panes of glass crashing to the ground came as such a big surprise.

the same architect in approx. the same period also had a bldg. across the charles on the MIT campus that had a problem with windows popping out. the "fix" was revolving doors on the ground floor (to minimize air pressure differences from opening ground floor doors).

there was a parody of the shuttle disaster with the booster rockets and the "o-rings" ... while there was a lot of attention paid to the operational characteristics of the "o-rings" ... the parody was that the only reason o-rings were required at all ... was because congress mandated that the booster rockets be built near the rockies ... and then transported to the cape (requiring them to be in sections for transportation; resulting in the o-rings when they were assembled).

the parody was that somebody in the queen's court convinced her that columbus's ships had to be built in the mountains (where the trees were), then sawed into three pieces for transportation to the harbor ... and glued back together and then launched (as opposed to transporting the trees from the mountains to the harbor for construction of the ships). enormous resources were then focused on the technology of gluing a ship back together after it had been sawed into three pieces (as opposed to deciding that ships could be built in the harbor, avoiding having to saw them into pieces at all).

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.usage.english, alt.folklore.computers
Date: Thu, 10 Feb 2011 19:28:32 -0500
Michael Wojcik <mwojcik@newsguy.com> writes:
Centralized computing of some sort, certainly. It's clearly not a reference to personal computers.

there were some number of virtual machine based, commercial, online timesharing services that sprang up in the 60s ... using some derivative of cp67.

the virtual machine part provided for a kind of personal computer/computing.

I've made some number of references to it being the earlier generation of cloud computing.

misc. past posts referring to virtual machine based, commercial, online timesharing
https://www.garlic.com/~lynn/submain.html#timeshare

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Thu, 10 Feb 2011 21:40:37 -0500
"Joe Morris" <j.c.morris@verizon.net> writes:
And on the issue of just rerunning the job...that's not *always* unreasonable. Recall the bug that Melinda Varian discovered in the VM dispatch code for a multiprocessor system, where the system forgot to load the FPRs on, IIRC, a fast redispatch.

I first did "fastpath" (including fast redispatch) in the 60s at the univ, for (uniprocessor) cp67. it is included in the pathlength work mentioned in this 1968 presentation at share
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

and was picked up and shipped in the standard cp67 product.

there was lots of simplification in the morph from cp67 to vm370 ... and all the fastpath code was dropped. Even tho i still hadn't done the port of the cambridge system changes from cp67 to vm370 ... referenced here in this old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

i did give the development group the "fastpath" changes that (i believe) were shipped in vm370 release 1, plc9.

the justification for not restoring floating point registers in "fast redispatch" (interrupt into the kernel and then resuming execution of the same virtual machine) was that the cp kernel never used floating point registers (so they should have been unchanged during kernel execution).

multiprocessor support required getting smarter about "fast redispatch" (checking whether the same virtual machine was being redispatched on the same processor, with the contents of the real floating point registers still current for that virtual machine).
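
a minimal sketch of that test (invented structure and names ... the actual HPO code does it with a CLC of VMLSTPRC against LPUADDR in DMKDSP, visible in an excerpt quoted in a later post here):

# sketch: the FPR reload may only be skipped on fast redispatch when this
# virtual machine was the last thing dispatched on *this* processor, so
# the real FPRs still hold its values (all names invented for illustration)
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    saved_fprs: list                 # FPR contents in the VM's save area
    last_processor: int = -1         # cf. VMLSTPRC

@dataclass
class Processor:
    address: int                     # cf. LPUADDR
    last_vm: object = None
    fprs: list = field(default_factory=lambda: [0.0] * 4)  # real FPRs

def redispatch(vm: VirtualMachine, cpu: Processor) -> None:
    if cpu.last_vm is not vm or vm.last_processor != cpu.address:
        cpu.fprs = list(vm.saved_fprs)   # slow path: reload the real FPRs
    # else: fast path ... the real FPRs still hold this VM's values
    vm.last_processor = cpu.address
    cpu.last_vm = vm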

various fiddling in dispatch can result in all sorts of problems ... here is recent (linkedin) post about fix "vm13025" ...
https://www.garlic.com/~lynn/2011b.html#61

including in above, old email with melinda:
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 00:50:19 -0500
hancock4 writes:
In the 1980s we got an IBM 6670 as a cheap remote printer (so we were told) and that had beautiful print quality.

the 6670 was basically an ibm copier/3 ... with a computer interface (and able to print both sides) ... that printed fonts/characters. i've mentioned before about the copier/3 having problems with paper jams (vis-a-vis the competition) ... and somebody got the bright idea to have a tv advertisement highlighting how easy it was to clear copier/3 paper jams ... which backfired because it reminded people how much they hated paper jams.

sjr did a number of enhancements ... one was to RSCS, to print random quotations on the separator page (from the alternate paper drawer ... which was typically filled with colored paper, to make print job separation easier).

another enhancement was all-points-addressable ... aka apa/sherpa. old email mentioning sherpa
https://www.garlic.com/~lynn/2006p.html#email820304
in this post
https://www.garlic.com/~lynn/2006p.html#44 Materiel and graft

misc. other past posts mentioning apa/sherpa
https://www.garlic.com/~lynn/2005f.html#48 1403 printers
https://www.garlic.com/~lynn/2006p.html#49 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2007g.html#27 The Complete April Fools' Day RFCs
https://www.garlic.com/~lynn/2007u.html#72 Parse/Template Function
https://www.garlic.com/~lynn/2008d.html#51 It has been a long time since Ihave seen a printer
https://www.garlic.com/~lynn/2008o.html#68 Blinkenlights
https://www.garlic.com/~lynn/2008o.html#69 Blinkenlights
https://www.garlic.com/~lynn/2010c.html#74 Apple iPad -- this merges with folklore
https://www.garlic.com/~lynn/2010e.html#43 Boyd's Briefings
https://www.garlic.com/~lynn/2010h.html#59 IBM 029 service manual
https://www.garlic.com/~lynn/2010k.html#49 GML
https://www.garlic.com/~lynn/2011.html#1 Is email dead? What do you think?

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 09:43:57 -0500
"Esra Sdrawkcab" <admin@127.0.0.1> writes:
This was considered the "future" at my PPOE in around 85 - PC f/e with code editting local compiling and unit test.

PASF was IIRC a PC f/e to PROFS


i've mentioned before that in the early '80s, there was folklore that the (executive branch) PROFS backups contributed to Ollie N. problems.

There was an (internal) email client called VMSG ... an extremely early 0.0-something pre-release version was used for the core email in PROFS ... but the PROFS group claimed to have done it themselves. Later, when the author offered to provide them with the latest version with lots of upgrades ... the PROFS group attempted to get him fired. He managed to show that every PROFS email in the world carried his initials in a non-displayed field. After that he limited source distribution to only two other people (me and one other).

recent reference to the SNA/VTAM group having told members of the executive committee that one of the reasons the internal network had to be converted to SNA/VTAM was that PROFS was a VTAM application
https://www.garlic.com/~lynn/2011b.html#65 If IBM Hadn't Bet the Company

another reference to SNA/VTAM group working hard to get internal network converted to SNA/VTAM
https://www.garlic.com/~lynn/2011.html#email870306
in this post
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?

then reference to various internal factions (including SNA/VTAM) giving me lots of problems working with NSF on the NSFNET backbone
http://lnkd.in/JRVnGk

including this old email reference (somebody had put together a large collection of emails with lots of misinformation trying to position SNA/VTAM as the solution for the NSFNET backbone):
https://www.garlic.com/~lynn/2006w.html#email870109
in this post
https://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

recent reference to author of VMSG also wrote PARASITE/STORY:
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company

misc. other past posts mentioning PROFS (&/or VMSG):
https://www.garlic.com/~lynn/99.html#35 why is there an "@" key?
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001j.html#35 Military Interest in Supercomputer AI
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#39 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#40 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2001k.html#56 E-mail 30 years old this autumn
https://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
https://www.garlic.com/~lynn/2002h.html#58 history of CMS
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002h.html#64 history of CMS
https://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#4 HONE, ****, misc
https://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003e.html#69 Gartner Office Information Systems 6/2/89
https://www.garlic.com/~lynn/2003j.html#56 Goodbye PROFS
https://www.garlic.com/~lynn/2003m.html#26 Microsoft Internet Patch
https://www.garlic.com/~lynn/2004j.html#33 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2005t.html#43 FULIST
https://www.garlic.com/~lynn/2005t.html#44 FULIST
https://www.garlic.com/~lynn/2006d.html#10 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006q.html#4 Another BIG Mainframe Bites the Dust
https://www.garlic.com/~lynn/2006t.html#42 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006x.html#7 vmshare
https://www.garlic.com/~lynn/2007b.html#14 Just another example of mainframe costs
https://www.garlic.com/~lynn/2007b.html#31 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007e.html#4 The Genealogy of the IBM PC
https://www.garlic.com/~lynn/2007f.html#13 Why is switch to DSL so traumatic?
https://www.garlic.com/~lynn/2007j.html#50 Using rexx to send an email
https://www.garlic.com/~lynn/2007p.html#29 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007v.html#54 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#55 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#63 An old fashioned Christmas
https://www.garlic.com/~lynn/2008.html#69 Rotary phones
https://www.garlic.com/~lynn/2008.html#75 Rotary phones
https://www.garlic.com/~lynn/2008h.html#46 Whitehouse Emails Were Lost Due to "Upgrade"
https://www.garlic.com/~lynn/2008k.html#59 Happy 20th Birthday, AS/400
https://www.garlic.com/~lynn/2009.html#8 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009.html#23 NPR Asks: Will Cloud Computing Work in the White House?
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2009k.html#16 Mainframe hacking
https://www.garlic.com/~lynn/2009l.html#41 another item related to ASCII vs. EBCDIC
https://www.garlic.com/~lynn/2009m.html#34 IBM Poughkeepsie?
https://www.garlic.com/~lynn/2009o.html#33 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2009o.html#38 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2009q.html#43 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009q.html#49 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009q.html#51 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2009q.html#66 spool file data
https://www.garlic.com/~lynn/2010.html#1 DEC-10 SOS Editor Intra-Line Editing
https://www.garlic.com/~lynn/2010b.html#8 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control
https://www.garlic.com/~lynn/2010b.html#87 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010b.html#96 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010b.html#97 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010c.html#88 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#61 LPARs: More or Less?
https://www.garlic.com/~lynn/2010o.html#4 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011.html#4 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011b.html#10 Rare Apple I computer sells for $216,000 in London

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 10:32:04 -0500
Roland Hutchinson <my.spamtrap@verizon.net> writes:
By the late 70s, Xerox was _building_ laser printers -- to go with the Alto, but I don't think they were _selling_ them until the Star workstation was put on the market (1981, says Wikipedia).

the 3800 was a massive datacenter (laser) printer ... a replacement for the (impact) 1403 & 3211.
http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV3103.html

from above:
The IBM 3800 laser-electrophotographic printer of 1975 had a speed of 20,000 lines a minute in preparing bank statements, premium notices and other high-volume documents. Laser beam paths were altered millions of times a second and were reflected from an 18-sided mirror that spun at 12,000 revolutions per minute.

... snip ...

laser printer wiki
https://en.wikipedia.org/wiki/Laser_printer

re:
https://www.garlic.com/~lynn/2011b.html#82 If IBM Hadn't Bet the Company

google usenet from 1985 mentioning XEROX laser printer emulating 6670.
http://groups.google.com/group/fa.laser-lovers/browse_thread/thread/bb59533ee5d75b2d

OPD history ... mentioning a number of things in the 60s & 70s, including "Copier II"
http://www-03.ibm.com/ibm/history/exhibits/modelb/modelb_office2.html

it then mentions, in 1976, the Series III Copier/Duplicator (which I presume is the same as the "Copier III" ... essentially what the 6670 derived from, with a computer interface added).

reference to 6670:
http://www-03.ibm.com/ibm/history/reference/faq_0000000011.html

above also mentions powerparallel SP2 with up to 128 nodes announced April 1994.

from these posts:
https://www.garlic.com/~lynn/2001n.html#70
https://www.garlic.com/~lynn/2001n.html#83

earlier version was SP1 ... referenced in these press items:
https://www.garlic.com/~lynn/2001n.html#6000clusters2 11May92
and
https://www.garlic.com/~lynn/2001n.html#6000clusters1 2/17/92

the above was just barely a month later than this Jan92 meeting in Ellison's conference room
https://www.garlic.com/~lynn/95.html#13

other old email mentioning cluster scale-up
https://www.garlic.com/~lynn/lhwemail.html#medusa

as part of ha/cmp product effort
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 10:47:30 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
... there is also song that has line about "Leland Stanford Jr. Varsity Farm".

this just has "Leland Stanford Junior Farm" reference ... but I remember it with varsity thrown in:
http://www.mikeleal.com/campsongs/campsongs2.html

this mentions "leland stanford junior varsity farm":
http://www.thehulltruth.com/dockside-chat/17756-college-b-ball-hawks-zags-et-al.html

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 11:21:04 -0500
hancock4 writes:
I guess for purposes of this kind of discussion my idea of "new development" is a new system. That's either something that hasn't existed before, or, an _extensive_ rewirte of an existing system. Obviously there's a lot of "maintenance" going on, including a few new programs here and there.

For several years there's been the idea of a GUI front end for users so they could use their PC easily to access the application, and, a traditional CICS and mainframe back-end to provide the capacity and reliability to serve user needs. In essence, the CICS maps would be replaced by GUI. I've even seen existing CICS maps in service replaced by GUI.

I've always thought this was a good idea, a win-win kind of thing, but I'm not sure how popular it is. I get the impression new stuff is going all new with Oracle servers on the back end and Java on the front end.


re:
https://www.garlic.com/~lynn/2011b.html#74 IF IBM Hand't Bet the Company

one of the big business pushes is to take call-center support ... many with large rooms of terminals run by CICS ... and do a "webified" front end (for the CICS screens; some of this could just be the old-time "screen scraping" that had been done on PCs with HLLAPI terminal emulation applications) ... with various security, authentication, and other constraints ... allowing end-users to do a lot of their own operations ... w/o having to involve the call-center.

a decade ago, i visited a datacenter that had a large banner on the wall with something about over 120 CICS "regions". this was before CICS had multiprocessor support ... so the method to increase concurrent operations was to operate multiple "regions", i.e. independent copies/invocations of CICS. CICS had its own highly optimized multithreaded support ... but each CICS appeared to the operating system as a single, serialized execution (one processor's worth).
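
running multiple regions is the same dodge still used today when an internally multithreaded, single-image server needs more than one processor: replicate the whole process and spread the work. a toy sketch of the pattern (nothing CICS-specific ... all names invented):

# toy sketch of the "multiple regions" pattern: each worker is one process
# (internally it could multiplex many transactions, but the OS sees a single
# serialized execution), so multiprocessor concurrency comes from N copies
import multiprocessing as mp

def region(region_id, requests):
    for txn in iter(requests.get, None):   # None = shutdown sentinel
        print(f"region {region_id} handled {txn}")

if __name__ == "__main__":
    queue = mp.Queue()
    regions = [mp.Process(target=region, args=(i, queue)) for i in range(4)]
    for p in regions:
        p.start()
    for n in range(12):
        queue.put(f"txn{n}")
    for _ in regions:
        queue.put(None)                    # one sentinel per region
    for p in regions:
        p.join()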

the datacenter claimed to provide dataprocessing for nearly all the cable tv companies in the US ... all the local cable tv office terminal screens were CICS into this datacenter; it did all the billing and accounting ... and the backend for the cable tv call-centers ... it was also the "head-end" for all the cable tv settop boxes ... sending out signals that eventually changed/updated individual settop boxes.

reference to CICS finally getting multiprocessor support in 2004
https://web.archive.org/web/20041023110006/http://www.yelavich.com/history/ev200402.htm

where you could get simultaneous execution threads running concurrently on multiple processors for the same CICS image (although it required new application conformance for "threadsafe").

misc. past posts mentioning CICS (&/or BDAM):
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

The first personal computer (PC)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The first personal computer (PC)
Newsgroups: alt.usage.english, alt.folklore.computers
Date: Fri, 11 Feb 2011 11:36:58 -0500
Walter Bushell <proto@panix.com> writes:
I worked with the TOP-10 and -20 and that was definately not the case. The response time varied widely depending on how many other users were on the system. I was doing development and was frequently told to stop using the machine when the user load was high.

re:
https://www.garlic.com/~lynn/2011b.html#80 The first personal computer (PC)

they obviously didn't have my fairshare scheduler ... which i had originally done for cp67 as an undergraduate (it actually established generalized resource consumption policies ... with the default being "fairshare")
https://www.garlic.com/~lynn/subtopic.html#fairshare

things would still degrade under heavy load ... but much more gracefully ... and trivial interactive activity tended to be fairly well insulated from heavy users.
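
the core of the idea (a minimal sketch with an invented structure ... not the actual cp67/vm370 code, which also decayed past consumption over time): dispatch order is derived from each user's CPU consumption relative to its share, so trivial interactive users naturally sort ahead of the heavy consumers:

# minimal fairshare sketch: the deadline grows with consumption relative
# to share, so users under their share get dispatched first
import heapq

def deadline(user, now=0.0, interval=1.0):
    return now + interval * (user["consumed"] / user["share"])

users = [
    {"name": "interactive", "share": 1.0, "consumed": 0.05},
    {"name": "cpu-hog",     "share": 1.0, "consumed": 3.70},
    {"name": "batch",       "share": 0.5, "consumed": 0.90},
]

queue = [(deadline(u), u["name"]) for u in users]
heapq.heapify(queue)
while queue:
    when, name = heapq.heappop(queue)
    print(f"dispatch {name} (deadline {when:.2f})")
# -> interactive (0.05), then batch (1.80), then cpu-hog (3.70)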

This was dropped as part of the simplification morph from cp67 to vm370 ... but I later got to re-introduce it as the "resource manager". However, this was after the future system demise and the mad rush to get products back into the 370 hardware&software product pipelines (contributing to the decision to release stuff that I had been doing all during the future system period) ... misc. past posts mentioning "future system"
https://www.garlic.com/~lynn/submain.html#futuresys

the distraction of future system (and future system doing its best to kill off all 370 activity ... as part of supporting its strategic position) ... is claimed as a contributing factor to clone processors getting a market foothold.

the 23jun69 unbundling announcement (in response to various litigation) had started charging for software (and other changes) ... but the company managed to make the case (with the gov) that kernel software should still be free.
https://www.garlic.com/~lynn/submain.html#unbundle

however, later, with the clone processors in the market, the decision was made to transition to charging for kernel software ... and my resource manager was selected as the guinea pig ... which meant that I had to spend a lot of time with business & legal people on policies for kernel software charging. the transition period also had other implications ... discussed in this recent long-winded post:
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users

During the transition, there was a combination of free & charged-for kernel software ... with sometimes peculiar interactions ... after several years ... the transition was complete and all software was being charged for. About that time, the OCO-wars (object code only) began: in addition to charging for all software ... full source was no longer provided.

--
virtualization experience starting Jan1968, online at home since Mar1970

NASA proves once again that, for it, the impossible is not even difficult

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NASA proves once again that, for it, the impossible is not even difficult.
Newsgroups: comp.arch, comp.arch.embedded
Date: Fri, 11 Feb 2011 12:47:04 -0500
Paul Colin Gloster <Colin_Paul_Gloster@ACM.org> writes:
It should be illegal to allow a vehicle to have enough momentum to be fatal.

re:
https://www.garlic.com/~lynn/2011b.html#79 NASA proves once again that, for it, the impossible is not even difficult.

superfreakonomics has a bit on how the cities of the world had a much more severe pollution problem before the advent of the automobile and internal combustion engine ... and that NYC had a higher rate of traffic deaths per thousand in the horse era than it now has from automobiles.

it also explained why the brownstones in NYC were so high above the ground (with steep front steps) ... because of the horse manure piled so high on the streets.

--
virtualization experience starting Jan1968, online at home since Mar1970

If IBM Hadn't Bet the Company

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If IBM Hadn't Bet the Company
Newsgroups: alt.folklore.computers
Date: Fri, 11 Feb 2011 15:02:40 -0500
re:
https://www.garlic.com/~lynn/2011b.html#81 If IBM Hadn't Bet the Company

from long ago and far away

Date: 18 May 1984, 09:59:01 PDT
To: distribution
Subject: Floating Point Registers corrupted by HPO 2.5 and above

You may have already been informed of this problem, but I wanted to make sure everyone in the area had the details.

APAR VM20536 describes a problem with all releases of CP from HPO 2.5 on up. Under certain circumstances it is possible for a virtual machine to be dispatched with the wrong set of floating point registers - a major annoyance, I'm sure you'll agree. For HPO 3.4 the fix for this APAR has a pre-requisite of VM20519.

Information from RETAIN on VM20536 follows:


PIN 5749-DM-K00-952 VM20536-IN-INCORROUT                 ACTIVE
F999-     -S/FIX-                                                 OPTIONS ARE:
CMS PAGE FAULTS ON LOAD OF FLOATING POINT REGISTERS                 PRINT


END OF ABSTRACT FESN6401302- REPORTED RELEASE RD95 ERROR DESCRIPTION: CMS VIRTSYS TAKES A PAGE FAULT AFTER A LOAD FPRS INSTRUCTION. AFTER THE CMS MACHINE IS DISPATCHED TO RUN, THE FPRS ARE RESTORED INCORRECTLY.

PROBLEM SUMMARY: USERS AFFECTED: ALL HPO2.5, HPO3, HPO3.2 AND HPO3.4 USERS FULL PAGE= ON FAST PATH DISPATCH FAILS TO RELOAD FLOATING POINT REGISTERS. PIN PROBLEM CONCLUSION: PG 1, 2 OF 6 DMKDSP WILL BE CHANGED TO RELOAD FLOATING POINT REGS CORRECTLY. TEMPORARY FIX: FIX IS AVAILABLE VIA A PTF REQUEST APAR= VM20536 OWNED BY:
... snip ... top of post, old email index

the following had a long list of MVC & CLC instructions in DMKDSP ... including one that was inserted as part of the 20536 fix for the floating point fast redispatch problem (I've snipped/pruned the list):

Date: 7 May 1986, 22:46:16 EDT
To: distribution
Subject: DMKDSP Performance

The following instructions are from the current version of DMKDSP (HPO 4.2) These are the 2 slowest instructions in the 370 BAL. Its stupid to have them in the most heavily used module in the system.


< ... snip ... >
         CLC   VMLSTPRC,LPUADDR+1 WAS VM LAST RUN ON THIS PROC @VA20536 29810920
         CLC   VMLSTPRC,LPUADDR+1 WAS VM LAST RUN ON THIS PROC @V407508 29940000
         CLC   VMLSTPRC,LPUADDR+1 WAS VM LAST RUN ON THIS PROC %VC5QAN0 31563000

... snip ... top of post, old email index

something else ... totally unrelated

Date: 10/31/86 12:34:29
From: wheeler

I didn't move off of CP/67 to working on VM/370 until release 2. I have a pure VM/370 Release 2 source tape ... but not any of the release 1 source tapes. You might contact yyyyyy in IBM Kingston about events during release 1.

As you can see from the append, the V=R check was already in the base release 2 VM/370 source. VMA was announced and released with release 2 ... but I expect it was developed and prototyped against release 1 VM/370 system. Check the referenced document and/or contact xxxx (295-nnnn, xxxxx@TDCSYS2) in POK for more information about SSK/V=R support in VMA.
------------------------------------------------------------------

ref. document: :cit.Virtual Machine Assist Feature Architecture Description:ecit., IBM Report TR00.2506, (Jan 1974).

------------------------------------------------------------------

DMKPRV Assembler (1/31/74) VM/370 Release 2 with no updates:

SETREAL  EQU   *              SET REAL STORAGE KEY                      0059800
         TM    VMPSTAT,VMREAL V=R USER ?                       @VA01071 0059810
         BO    *+8            YES - LEAVE ALL OF KEY "AS IS"   @VA01071 0059820
         N     R9,=A(X'F8')   MASK FOR REAL KEY (WITH          @VA01071 0059910
*                             FETCH-PROT. BIT)                          0059920
         TM    SWPFLAG,SWPSHR SHARED SYSTEM PAGE ?             @VA01071 0060000
         BCR   1,R10          YES - JUST LEAVE                          0060100
         LRA   R2,0(0,R6)     SEE IF PAGE IN CORE                       0060200
         BCR   7,R10          PAGE NOT IN CORE                          0060300
         SSK   R9,R2          SET REAL KEY                              0060400
         TM    VMOSTAT,VMSHR  RUNNING SHARED-SEGMENT SYSTEM ?           0060500
         BCR   8,R10          NO -- JUST RE-DISPATCH                    0060600
         LA    R15,240        CHECK FOR KEY OF 0               @VA01071 0060710
         NR    R15,R9         DO NOT SET REAL ZERO KEY         @VA01071 0060720
         BCR   7,R10   <BNZ>  IF NON-ZERO, IT'S OK.            @VA01071 0060730
-----------------------------------------------------------------------

DMKPRV Assembler VM/370 Release 4 with HPO 4.2 updates:

*---------------------------------------------------------------------- 1125621
*        HANDLE SIMULATION OF THE SSK INSTRUCTION.                      1125690
*---------------------------------------------------------------------- 1125759
         SPACE 1                                                        1125828
DOSSK    EQU   *                                               @V6KT2LD 1125897
         IC    R9,VMGPRS+3(R5)     GET NEW KEY                          1126000
         NR    R9,R8          MASK FOR USER                             1127000
         STC   R9,SWPKEY1(R1) SET IN SWPTABLE ENTRY            @VA01071 1128000
SETREAL  EQU   *              SET REAL STORAGE KEY                      1129000
         C     R11,AVMREAL    IS THIS THE V=R USER?            @VA12156 1130200
         BNE   NVER           NO,THEN BUSINESS AS USUAL        @VA12156 1130400
         TM    CPSTAT2,CPSPMODE ARE WE IN SPMODE?              @VA12156 1130600
         BZ    NVER1          NO,THEN BUSINESS AS USUAL        @VA12156 1130800
         SLR   R2,R2          CLEAR R2 FOR SWPTABLE UPDATE     @VA12156 1131000
         STC   R2,SWPKEY1(R1) PUT SWPTABLE KEY TO ZERO FOR     @VA12156 1131200
*                             V=R USER SO CC IS CORRECT FOR             1131400
*                             VMA IN SPMODE.                            1131600
         B     NVER1                                           @VA12156 1131800
NVER     DS    0H                                              @VA12156 1132000
         N     R9,=A(X'F8')   V=V MASK FOR KEY,FETCH-PROT-BIT  @VA01071 1132399
NVER1    DS    0H                                              @VA12156 1132600
         LRA   R2,0(0,R6)     SEE IF PAGE IN CORE                       1134000
         BCR   7,R10          PAGE NOT IN CORE                          1135000
         SSK   R9,R2          SET REAL KEY                              1136000
         BR    R10            FAST DISPATCH                             1137000

... snip ... top of post, old email index

old items mentioning moving off of CP67 to VM370:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

also, as I've previously mentioned, it wasn't long afterwards that Almaden had an operations problem with random tapes being mounted as scratch ... and I lost an enormous amount of stuff ... even things that had been replicated on 2-3 different tapes. past posts mentioning the Almaden tape library operational problems:
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2007l.html#51 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2008q.html#52 TOPS-10
https://www.garlic.com/~lynn/2009.html#8 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009.html#13 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009d.html#4 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009f.html#59 Backup and Restore Manager for z/VM
https://www.garlic.com/~lynn/2009n.html#66 Evolution of Floating Point
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2010.html#4 360 programs on a z/10
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2010b.html#96 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010q.html#45 Is email dead? What do you think?
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed

--
virtualization experience starting Jan1968, online at home since Mar1970



