From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Career Date: 09 Feb 2022 Blog: Facebook
re:
After transferring to San Jose Research, I got to wander around machine rooms/datacenters in silicon valley (both IBM and non-IBM), including disk engineering (bldg14) and product test (bldg15). They were running pre-scheduled, stand-alone, around-the-clock, 7x24 mainframe testing. They said that they had recently tried MVS, but it had 15min mean-time-between-failure in that environment (requiring manual re-ipl). I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing and greatly improving productivity. I then wrote up an (internal IBM) research report about all the work and happened to mention the MVS 15min MTBF, which brought down the wrath of the MVS organization on my head (informally I was told that they tried to have me separated from the IBM company; when that didn't work, they tried to make my career as unpleasant as possible ... the joke was on them, they had to get in a long line with lots of other people ... I was periodically being told that I had no career, promotions, and/or raises).
In the late 70s and early 80s, I was blamed for online computer conferencing (precursor to social media) on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 actually participated, but claims were that up to 25,000 were reading. We then printed six copies of some 300 pages, with executive summary and summary of the summary, packaged them in Tandem 3-ring binders, and sent them to the corporate executive committee (folklore is that 5of6 wanted to fire me).
I was introduced to John Boyd about the same time and would sponsor
his briefings at IBM (finding something of kindred spirit) ... in the
50s as instructor at USAF weapons school, he was referred to as 40sec
Boyd (challenge to all comers that he could beat them within 40sec),
considered possibly the best fighter pilot in the world. He then
invented E/M theory and used it to redo the original F15 design
(cutting weight nearly in half), and used it for the YF16 & YF17 (which
became the F16 & F18). By the time he passed in 1997, the USAF had
pretty much disowned him and it was the Marines at Arlington. One of
his quotes:
There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To Be
or To Do, that is the question.
... snip ...
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Boyd posts & URLs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: On why it's CR+LF and not LF+CR [ASR33] Newsgroups: alt.folklore.computers Date: Wed, 09 Feb 2022 10:48:05 -1000
David Lesher <wb8foz@panix.com> writes:
mentioned recently that the univ. was sold a 360/67 (to replace a 709/1401) supposedly for tss/360, but tss/360 never came to production fruition, so it ran as a 360/65 with os/360. The IBM TSS/360 SE would do some testing on weekends (I sometimes had to share my 48hr weekend time).
Shortly after CP67 was delivered to the univ, I got to play with it on weekends (in addition to os/360 support). Very early on (before I started rewriting lots of CP67 code), the IBM SE and I put together fortran edit/compile/execute benchmarks with simulated users, his for TSS/360, mine for CP67/CMS. His TSS benchmark had four users and had worse interactive response and throughput than my CP67/CMS benchmark with 35 users.
Later at IBM, I did a paged-mapped filesystem for CP67/CMS and would
explain I learned what not to do from TSS/360. The (failed) Future
System somewhat adopted its "single-level-store" from TSS/360 ... some
FS details
http://www.jfsowa.com/computer/memo125.htm
... and I would periodically ridicule FS (in part because of how they were doing
"single-level-store") ... which wasn't exactly a career-enhancing
activity. Old quote from Ferguson & Morris, "Computer Wars: The Post-IBM
World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project
1st half of the 70s:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat ... But because of the heavy investment of face by
the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time, during
F/S, outspoken criticism became politically dangerous," recalls a former
top executive.
... snip ...
one of the final nails in the FS coffin was an analysis by the IBM Houston Science Center that if 370/195 software was redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown).
The death of FS also gave virtual memory filesystems a really bad reputation inside IBM ... regardless of how they were implemented.
trivia: AT&T had a contract with IBM for a stripped-down TSS/360 kernel referred to as SSUP ... for UNIX to be layered on top. Part of the issue is that mainframe hardware support required production/type-1 RAS&EREP for maintenance. It turns out that adding that level of support to UNIX was a many times larger effort than doing a straight UNIX port to 370 (and layering UNIX on top of SSUP was significantly simpler).
This also came up for both Amdahl (gold/uts) and IBM (UCLA Locus for AIX/370) ... both running them under VM370 (providing the necessary type-1 RAS&EREP).
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some more recent posts mentioning SSUP:
https://www.garlic.com/~lynn/2021k.html#64 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021e.html#83 Amdahl
https://www.garlic.com/~lynn/2020.html#33 IBM TSS
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2018d.html#93 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017d.html#82 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2014j.html#17 The SDS 92, its place in history?
https://www.garlic.com/~lynn/2014f.html#74 Is end of mainframe near ?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Career Date: 09 Feb 2022 Blog: Facebook
re:
passthru drift ... well before IBM/PC & HLLAPI, there was
PARASITE/STORY (done by VMSG author, very early prototype VMSG was
used by PROFS group for email client, PARASITE/STORY was coding
marvel) ... using simulated 3270s on the same mainframe or via PVM,
creating simulated 3270s elsewhere on the internal
network. Description of parasite/story
https://www.garlic.com/~lynn/2001k.html#35
STORY example to log onto RETAIN and automagically retrieve PUT Bucket
(using the YKT PVM/CCDN gateway)
https://www.garlic.com/~lynn/2001k.html#36
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Final Rules of Thumb on How Computing Affects Organizations and People Date: 09 Feb 2022 Blog: Facebook
Final Rules of Thumb on How Computing Affects Organizations and People
I would say "e-commerce" prevailed because it used the bank "card-not-present" model and processing. After leaving IBM, I was brought in as consultant to a small client/server startup that wanted to do payment transactions on their servers; the startup had also invented this technology they called "SSL" that they wanted to use. I had responsibility for everything between their servers and the financial payment networks, and did a gateway that simulated one of the existing vendor protocols in common use by Las Vegas hotels and casinos.
e-commerce gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
Counter ... in the mid-90s, billions were being spent on redoing (mostly mainframe) financial software to implement straight-through processing. Some of it dated back to the 60s; real-time interfaces had been added over the years, but "settlement" was still being done in the overnight batch window. The problem in the 90s was that the overnight batch window was being shortened because of globalization ... which was also contributing to other workload increases, (both) resulting in the overnight batch window being exceeded. The straight-through processing implementations were planning on using large numbers of killer micros with industry standard parallelization libraries. Warnings by me and others that the industry parallelization libraries had a hundred times the overhead of batch cobol were ignored ... until large pilots were going down in flames (the increased overhead totally swamping the anticipated throughput increase from the large number of killer micros).
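A back-of-the-envelope sketch of the warning (a minimal Python sketch; the numbers are illustrative, assuming only the hundred-times-overhead figure above):

# Illustrative only: break-even arithmetic for replacing batch COBOL
# settlement with parallelized "killer micro" implementations, assuming
# ~100x per-transaction overhead from the parallelization libraries.
batch_cost = 1.0          # relative CPU cost per transaction, batch COBOL
overhead_factor = 100.0   # relative cost with the parallelization libraries
micros = 50               # hypothetical number of killer micros

# aggregate throughput relative to the single batch system
relative_throughput = micros * (batch_cost / (batch_cost * overhead_factor))
print(relative_throughput)   # 0.5 -- 50 micros deliver HALF the batch throughput

# break-even: need more micros than the overhead factor just to tie
print(overhead_factor / batch_cost)   # 100 micros just to match the batch window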
After the turn of the century, I was involved with somebody doing demos of something similar, but it relied on a financial transaction language that generated fine-grain SQL statements that were easily parallelized by RDBMS clusters. Its throughput handled many times the transaction rates of the largest production operations, and it depended on 1) the increase in micro performance and associated disks and 2) significant cluster RDBMS throughput optimization done by IBM and other RDBMS vendors. It was demo'ed to various industry financial bodies with high acceptance ... and then hit a brick wall. We were finally told that there were too many executives who still bore the scars of the failed 90s efforts.
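A minimal sketch of the fine-grain idea (Python; the schema, statements, and routing rule are hypothetical, not the actual transaction language):

# Hypothetical decomposition of one settlement transfer into independent,
# single-row SQL statements that a cluster RDBMS can route by account key.
def transfer_to_sql(txn_id, from_acct, to_acct, amount):
    # each statement touches one row keyed by account, so statements for
    # different accounts can execute in parallel on different cluster nodes
    return [
        ("UPDATE accounts SET balance = balance - %s "
         "WHERE acct = %s AND balance >= %s", (amount, from_acct, amount)),
        ("UPDATE accounts SET balance = balance + %s "
         "WHERE acct = %s", (amount, to_acct)),
        ("INSERT INTO journal (txn, acct, delta) VALUES (%s, %s, %s)",
         (txn_id, from_acct, -amount)),
        ("INSERT INTO journal (txn, acct, delta) VALUES (%s, %s, %s)",
         (txn_id, to_acct, amount)),
    ]

for stmt, params in transfer_to_sql(42, "A1001", "B2002", 100.00):
    print(stmt, params)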
had worked on RDBMS cluster-scale-up at IBM for HA/CMP Product
https://www.garlic.com/~lynn/subtopic.html#hacmp
some recent e-commerce posts
https://www.garlic.com/~lynn/2022.html#3 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2019d.html#84 Steve King Devised an Insane Formula to Claim Undocumented Immigrants Are Taking Over America
https://www.garlic.com/~lynn/2019d.html#74 Employers escape sanctions, while the undocumented risk lives and prosecution
https://www.garlic.com/~lynn/2018f.html#119 What Minimum-Wage Foes Got Wrong About Seattle
https://www.garlic.com/~lynn/2018f.html#35 OT: Postal Service seeks record price hikes to bolster falling revenues
https://www.garlic.com/~lynn/2018d.html#58 We must stop bad bosses using migrant labour to drive down wages
https://www.garlic.com/~lynn/2018b.html#72 Doubts about the HR departments that require knowledge of technology that does not exist
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2018.html#106 Predicting the future in five years as seen from 1983
overnight batch window posts
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SEC Set to Lower Massive Boom on Private Equity Industry Date: 10 Feb 2022 Blog: Facebook
SEC Set to Lower Massive Boom on Private Equity Industry
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: On why it's CR+LF and not LF+CR [ASR33] Newsgroups: alt.folklore.computers Date: Thu, 10 Feb 2022 08:07:27 -1000
Lynn Wheeler <lynn@garlic.com> writes:
... other RAS&EREP trivia/drift: when I transferred to san jose research in the late 70s, I got to wander around a lot of datacenters in silicon valley (both IBM & non-IBM), including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running pre-scheduled, stand-alone, 7x24 mainframes for engineering testing. They mentioned that they had recently tried MVS, but it had 15min mean-time-between-failure in that environment, requiring manual re-ipl (boot). I offered to rewrite the input/output supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing and greatly improving productivity. I then wrote an (internal) research report about the activity and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head.
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: On why it's CR+LF and not LF+CR [ASR33] Newsgroups: alt.folklore.computers Date: Thu, 10 Feb 2022 11:47:34 -1000
Peter Flass <peter_flass@yahoo.com> writes:
... real topic drift ... In the late 70s & early 80s, I was blamed for
online computer conferencing on the internal network ... it really took
off spring 1981 after I distributed a trip report of a visit to Jim Gray at
Tandem. Only around 300 participated, but the claim was that upwards of
25,000 were reading. Six copies of about 300 pages were printed with
executive summary and summary of the summary, packaged in Tandem 3-ring
binders, and sent to the corporate executive committee (folklore is that 5of6
wanted to fire me) ... some of the summary of the summary:
• The perception of many technical people in IBM is that the company is
rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to affect
revenue, at which point recovery may be impossible
• Many technical people are extremely frustrated with their management and
with the way things are going in IBM. To an increasing extent, people
are reacting to this by leaving IBM. Most of the contributors to the
present discussion would prefer to stay with IBM and see the problems
rectified. However, there is increasing skepticism that correction is
possible or likely, given the apparent lack of commitment by management
to take action
• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment.
... took another decade (1981-1992) ... IBM had gone into the red and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company .... reference gone behind paywall but mostly
lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with the breakup of the company. Lots of business units were using supplier contracts with other units via MOUs. After the breakup, all of those contracts would be between different companies ... all of the MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).
... previously, in my executive exit interview, I was told they could have forgiven me for being wrong, but they were never going to forgive me for being right.
... the joke was somewhat on the MVS group ... they had to get in a long line of people who wished I was fired.
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: USENET still around Date: 11 Feb 2022 Blog: Facebook
usenet still around ... as well as gatewayed to google groups ... alt.folklore.computers
re:
https://www.garlic.com/~lynn/2022.html#127 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2022b.html#1 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2022b.html#5 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2022b.html#6 On why it's CR+LF and not LF+CR [ASR33]
Other usenet trivia:
Leaving IBM, I had to turn in everything IBM, including a RS6000/320. I've mentioned before that the GPD/Adstar VP would periodically ask for help with some of his investments in distributed computing startups that would use IBM disks (a partial work-around to the communication group blocking all IBM mainframe distributed computing efforts). In any case, he had my (former) RS6000/320 given to me, delivered to my house.
Also, SGI (graphical workstations)
https://en.wikipedia.org/wiki/Silicon_Graphics
has bought MIPS (risc) Computers
https://en.wikipedia.org/wiki/MIPS_Technologies
and has gotten a new president. MIPS' new president asks me to 1)
take home his executive SGI Indy (to configure it) and then 2) keep it
for him (when he leaves MIPS, I have to return it).
So now in my office at home I have an RS/6000 320, an SGI Indy, and a couple
of 486 PCs. Pagesat runs a pager service ... but is also offering a full
satellite usenet feed
http://www.art.net/lile/pagesat/netnews.html
I get offered a deal ... if I do unix and dos drivers for the usenet
satellite modem, and write an article about it for Boardwatch magazine
https://en.wikipedia.org/wiki/Boardwatch
I get a free installation at home with full usenet feed. I also put up
a "waffle" usenet bbs on one of the 486 PCs.
https://en.wikipedia.org/wiki/Waffle_(BBS_software)
picture of me on the hill behind the house installing pagesat dish
https://www.garlic.com/~lynn/pagesat.jpg
periodically reposted: a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference in the late 80s, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters with its corporate strategic ownership of everything that crossed the datacenter wall, and was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. The disk division was seeing a drop in disk sales with customers moving to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but the communication group (with their corporate strategic datacenter stranglehold) would veto them.
posts about FS
https://www.garlic.com/~lynn/submain.html#futuresys
posts about playing disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
posts about online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
posts about communication group
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Porting APL to CP67/CMS Date: 11 Feb 2022 Blog: Facebook
CSC also ported apl\360 to cp67/cms for cms\apl ... and also did some of the early stuff for what later shipped to customers as VS/Repack ... it would trace instruction and storage use and do semi-automagic module reorg to improve performance in demand-paged virtual memory (it was also used by several internal IBM orgs as part of moving OS/360 software to VS1 and VS2).
We had 1403 printout of cms\apl storage use taped floor to ceiling down the halls ... time along the hallway (horizontal), storage address vertical ... looked like an extreme saw tooth. APL\360 had 16kbyte workspaces that were swapped; storage management allocated a new storage location for every assignment and then, when it ran out of storage, garbage collected and compressed everything to low addresses. That didn't make any difference in a small swapped workspace ... but in a large demand-paged virtual memory it would quickly touch every virtual page before each garbage collection ... guaranteed to cause page thrashing.
That was completely reworked to eliminate the severe page thrashing. Also added an API for system services (for things like file i/o; the combination of large workspaces and the system services API allowed real-world applications). APL purists criticized the API implementation ... it was eventually replaced with "shared variables".
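A toy illustration of the allocate-per-assignment behavior (Python; sizes and names are hypothetical, just to show the page-touching pattern):

# Every assignment gets fresh storage, so the allocator sweeps the whole
# workspace -- touching every page -- before each garbage collection.
PAGE = 4096
workspace = 16 * 1024 * 1024        # large demand-paged workspace (assumed)
next_free = 0
pages_touched = set()

def assign(nbytes):
    """Allocate-per-assignment: never reuse storage in place."""
    global next_free
    if next_free + nbytes > workspace:    # exhausted: garbage collect and
        next_free = 0                     # compress everything to low addresses
    addr = next_free
    next_free += nbytes
    for p in range(addr // PAGE, (addr + nbytes - 1) // PAGE + 1):
        pages_touched.add(p)              # each allocation dirties new pages

for _ in range(workspace // 64 + 1):      # tight loop reassigning one value
    assign(64)
print(len(pages_touched), "of", workspace // PAGE, "pages touched")
# every page gets referenced before GC -- guaranteed thrashing whenever
# real storage is smaller than the workspace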
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CMS\APL was also used for deploying the APL-based sales&marketing support apps on (CP67) HONE systems.
The Palo Alto Science Center ... then did the migration from CP67/CMS to VM370/CMS for APL\CMS. PASC also did the apl microcode assist for the 370/145 (running lots of APL on a 145 at the speed of a 370/168) and APL for the IBM 5100.
When HONE consolidated the US HONE systems in Palo Alto, their datacenter was across the back parking lot from PASC (by this time, HONE had migrated from CP67/CMS to VM370/CMS). An issue for HONE was that it also needed the large storage and I/O of 168s ... so it was unable to take advantage of the 370/145 apl microcode assist. HONE had max'ed out the number of 168s in a single-system-image, loosely-coupled (sharing all disks) configuration with load-balancing and fall-over. The VM370 development group had simplified and dropped a lot of stuff in the CP67->VM370 morph (including shared-memory, tightly-coupled multiprocessor support). After migrating lots of stuff from CP67 into VM370 release 2, I got SMP multiprocessor support back into a VM370 release 3 version ... initially specifically for HONE, so they could add a 2nd processor to every 168 system.
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP posts
https://www.garlic.com/~lynn/subtopic.html#smp
trivia: about HONE loosely-coupled ... they used a "compare-and-swap" channel program mechanism (analogous to the compare-and-swap 370 instruction) ... read the record, update it, then write it back compare-and-swap style (search equal, then only update/write the record if it hadn't been changed).
This had significantly lower overhead than reserve/release. It also had an advantage over the ACP/TPF loosely-coupled locking RPQ for the 3830 controller. The 3830 lock RPQ only worked for disks connected to a single 3830 controller ... limiting ACP/TPF configurations to four systems (because of the 3830 four-channel interface). The HONE approach worked with string-switch ... where a string of 3330 drives was connected to two different 3830 controllers (each with a four-channel switch) ... allowing eight systems operating in loosely-coupled mode (and then a 2nd cpu added to each system).
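The pattern is an optimistic read/modify/conditional-rewrite loop; a minimal sketch of the idea (Python, with hypothetical record layout and function names ... the real mechanism was a channel program, not code):

import copy

shared_disk = {"hone_load": {"version": 0, "users": 100}}   # stands in for a shared 3330 record

def read_record(key):
    return copy.deepcopy(shared_disk[key])

def write_if_unchanged(key, expected, updated):
    """The search-equal-then-write step: only rewrite if still unchanged."""
    if shared_disk[key] == expected:
        shared_disk[key] = updated
        return True
    return False          # another system updated it first; caller retries

def add_users(key, n):
    while True:
        old = read_record(key)
        new = dict(old, version=old["version"] + 1, users=old["users"] + n)
        if write_if_unchanged(key, old, new):
            return new    # no reserve/release needed

print(add_users("hone_load", 5))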
recent ACP/TPF post
https://www.garlic.com/~lynn/2021i.html#77
referencing old post
https://www.garlic.com/~lynn/2008i.html#39 American Airlines
with old ACP/TPF lock rpq email
https://www.garlic.com/~lynn/2008i.html#email800325
specific posts mentioning HONE single system image
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2018d.html#83 CMS\APL
https://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017d.html#42 What are mainframes
https://www.garlic.com/~lynn/2016.html#63 Lineage of TPF
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014g.html#103 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014c.html#88 Optimization, CPU time, and related issues
https://www.garlic.com/~lynn/2012.html#10 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2011n.html#35 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents
https://www.garlic.com/~lynn/2010l.html#20 Old EMAIL Index
https://www.garlic.com/~lynn/2008j.html#50 Another difference between platforms
https://www.garlic.com/~lynn/2003b.html#26 360/370 disk drives
... and some old posts mentioning VS/Repack
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021c.html#37 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2017j.html#86 VS/Repack
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2016h.html#111 Definition of "dense code"
https://www.garlic.com/~lynn/2016f.html#92 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2015f.html#79 Limit number of frames of real storage per job
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2014b.html#81 CPU time
https://www.garlic.com/~lynn/2013k.html#62 Suggestions Appreciated for a Program Counter History Log
https://www.garlic.com/~lynn/2012o.html#20 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#19 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012j.html#82 printer history Languages influenced by PL/1
https://www.garlic.com/~lynn/2012j.html#20 Operating System, what is it?
https://www.garlic.com/~lynn/2012d.html#73 Execution Velocity
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010m.html#5 Memory v. Storage: What's in a Name?
https://www.garlic.com/~lynn/2010k.html#9 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010k.html#8 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010j.html#81 Percentage of code executed that is user written was Re: Delete all members of a PDS that is allocated
https://www.garlic.com/~lynn/2010j.html#48 Knuth Got It Wrong
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Porting APL to CP67/CMS Date: 11 Feb 2022 Blog: Facebook
re:
Some of the early remote (dialup) CMS\APL users were Armonk business planners. They sent some of the most valuable IBM business information (detailed customer profiles, purchases, kind of work, etc) to cambridge and implemented APL business modeling using the data. In cambridge, we had to demonstrate extremely strong security ... in part because various professors, staff, and students from Boston/Cambridge area universities were also using the CSC CP67/CMS system.
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Seattle Dataprocessing Date: 11 Feb 2022 Blog: Facebook
Spent a lot of time in the Seattle area, but not for IBM. In grade school (up over the hill from Boeing field), the Boeing group that sponsored our cub scout pack gave an evening plane ride: took off from Boeing field, flew around the skies of Seattle, and then back to Boeing field. While an undergraduate, I was hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities).
The Renton datacenter has a couple hundred million in 360s, 360/65s arriving faster than they could be installed (boxes constantly staged in the hallways around the machine room), and 747#3 flying the skies of Seattle getting FAA flight certification. Lots of politics between the CFO and the Renton datacenter director. The CFO only had a small machine room at Boeing field with a 360/30 for payroll, although they enlarge it and install a 360/67 for me to play with when I'm not doing other stuff. When I graduate, I join the IBM science center on the opposite coast and later transfer to san jose research.
Much later, after cluster scale-up for our HA/CMP product is transferred and announced as an IBM supercomputer, and we are told we can't work on anything with more than four processors, we leave IBM.
Later, we are brought into a small client/server startup as consultants; two former Oracle people that we had been working with on RDBMS cluster scale-up are there, responsible for something called "commerce server", and want to do payment transactions on the server. The startup had also invented this technology they called "SSL"; the result is now frequently called "electronic commerce".
In 1999, a financial company asks us to spend a year in Seattle working
on electronic commerce projects with a few companies in the area,
including the large PC company in Redmond and a Kerberos/security company
out in Issaquah (it had a contract with the Redmond PC company to port
Kerberos, which becomes Active Directory). We have regular meetings with the
CEO of the Kerberos company; previously he had been head of IBM POK,
then IBM BOCA, and was also at Perot Systems
https://www.nytimes.com/1997/07/26/business/executive-who-oversaw-big-growth-at-perot-systems-quits.html
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
other recent posts mentioning boeing computer services
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021h.html#46 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#5 Availability
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2020.html#29 Online Computer Conferencing
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Seattle Dataprocessing Date: 11 Feb 2022 Blog: Facebook
re:
... oops didn't realize that nytimes article might be behind paywall
for some ... similar, but different article, PEROT'S PARTNERS
https://www.dmagazine.com/publications/d-magazine/1997/january/perots-partners/
My wife had run into him in a prior life, when she was in the GBURG JES group (and one of the catchers for JES3). He cons her into going to POK to be in charge of loosely-coupled architecture, where she does Peer-coupled shared data architecture. She doesn't remain long because of 1) constant battles with the communication group trying to force her into using VTAM for loosely-coupled operation, and 2) little uptake (until much later for sysplex and parallel sysplex) except for IMS hot-standby (she has a story about asking Vern Watts who he would ask for permission to do hot-standby; he tells her nobody, he will just do it and tell IBM about it when it is all done).
Peer-Coupled Shared Data posts
https://www.garlic.com/~lynn/submain.html#shareddata
trivia: Dec 1999, we have a booth at the World Wide Retail Banking Show in Miami
with some of the companies we were working with (not just in the Seattle area).
Old 1Dec1999 post
https://www.garlic.com/~lynn/99.html#217
press release at the show
https://www.garlic.com/~lynn/99.html#224
AADS references
https://www.garlic.com/~lynn/x959.html#aads
X9.59 references
https://www.garlic.com/~lynn/x959.html#x959
X9.59 posts
https://www.garlic.com/~lynn/subpubkey.html#x959
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TCP/IP and Mid-range market Date: 12 Feb 2022 Blog: Facebook
re:
other trivia: there are periodic posts by former DEC people in (usenet) alt.folklore.computers about the person behind DEC VMS (Cutler) going to redmond to do NT for m'soft. other trivia: the IBM FS project was completely different from 370 and was going to completely replace it (internal politics was killing off 370 efforts, and the lack of new 370s is credited with giving the clone 370 makers their market foothold).
other FS trivia:
http://www.jfsowa.com/computer/memo125.htm
when FS implodes there is a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033 & 3081
efforts. Ferguson & Morris, "Computer Wars: The Post-IBM World", Time
Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of
the 70s:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
The head of IBM POK (mainframes) also convinced corporate to kill the vm370 (virtual machine) product, shut down the development location (burlington mall, mass, off 128, former SBC bldg), and transfer all the people to POK to support MVS/XA development (Endicott eventually manages to save the vm370 product mission, but has to reconstitute a development group from scratch). They weren't going to tell the VM group until the very last minute, to minimize the number who might escape. The information managed to leak early and several managed to escape ... including to the infant VMS effort at DEC. The joke was that one of the largest contributors to VMS was the head of IBM POK. There was also a witch hunt for the source of the leak; fortunately for me ... nobody gave up the source.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360 Performance Date: 13 Feb 2022 Blog: Facebook
After taking a two-semester-hr intro to fortran/computers, I was hired fulltime at the univ to be responsible for os/360. The univ. had been sold a 360/67 for tss/360 (to replace a 709/1401), but tss/360 never quite came to production fruition, so it ran as a 360/65 (w/os360). The univ. shutdown the datacenter over the weekend and I had the whole place to myself (although 48hrs w/o sleep could make monday morning classes hard). When CSC came out (Jan1968) with CP67 (we were the 3rd installation after CSC itself & Lincoln Labs), I got to play with it also on the weekends (rewriting lots of the code). I was then part of the CP67 announcement at SHARE, Houston, spring 1968.
Originally, on the 709 (tape->tape), student jobs ran in less than a second; on the
360/65 they ran over a minute. I installed HASP, which cut that in
half. Then I started redoing SYSGEN to carefully place datasets and
PDS members to optimize arm seek and PDS (multi-track) directory
member search, cutting it by another 2/3rds to 12.9secs. It never got
better than the 709 until we installed Univ. of Waterloo's WATFOR. For CP67 I
started by rewriting a lot of code for running OS/360 in a virtual
machine. Part of the Fall68 SHARE presentation:
https://www.garlic.com/~lynn/94.html#18
An OS/360 benchmark ran 322sec on the bare machine; originally under CP67 it ran 856sec (534sec of CP67 CPU). Rewriting lots of code got it down to 435sec (113sec of CP67 CPU) ... a reduction in CP67 CPU overhead from 534sec to 113sec, or 421sec.
science center (& cp67) posts
https://www.garlic.com/~lynn/subtopic.html#545tech
A 2011 post, History of VM Performance (originally Oct86 SEAS,
European SHARE) ... w/copy of presentation being regiven at 2011 DC
Hillgang user group meeting
https://www.garlic.com/~lynn/2011c.html#72
some other past posts mentioning oct86 SEAS:
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#82 IBM Internal network
https://www.garlic.com/~lynn/2021g.html#46 6-10Oct1986 SEAS
https://www.garlic.com/~lynn/2021e.html#65 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory
other early CP67 trivia: it had support for 1052 & 2741 with automagic terminal type recognition (using the terminal controller SAD CCW to change the terminal-type line scanner for each port). The univ. had some number of ASCII terminals and so I added ASCII support (extending the automagic terminal type recognition to ASCII).
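A hedged sketch of what that recognition loop amounts to (Python; the device names and probe details are illustrative, not the actual CP67 code):

CANDIDATES = ["1052", "2741", "TTY"]       # TTY/ASCII was the added case

def set_line_scanner(port, term_type):     # stands in for the SAD CCW
    print(f"port {port}: scanner set for {term_type}")

def probe(port, term_type):
    # hypothetical: write/read a test sequence, True if the echo is clean
    return term_type == "2741"             # pretend a 2741 is attached

def identify_terminal(port):
    for term_type in CANDIDATES:           # cycle the scanner through each
        set_line_scanner(port, term_type)  # candidate terminal type and keep
        if probe(port, term_type):         # the first one that answers sensibly
            return term_type
    return None                            # unknown device; leave port disabled

print("detected:", identify_terminal(7))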
I then wanted to have a single dial-in number for all terminals ... a hunt
group
https://en.wikipedia.org/wiki/Line_hunting
for all terminals. It didn't quite work: while I could switch the line scanner
for each port (on the IBM telecommunication controller), IBM had taken a
short cut and hard-wired the line speed for each port (TTY was a different
line speed from 2741&1052). Thus was born the univ. project to do a clone
controller: we built a mainframe channel interface board for an Interdata/3,
programmed to emulate the mainframe telecommunication controller with the
addition that it could also do dynamic line speed determination. Later it
was enhanced with an Interdata/4 for the channel interface and a cluster of
Interdata/3s for the port interfaces. Interdata (and later
Perkin/Elmer) sell it commercially as an IBM clone controller. Four of us
at the univ. get written up as responsible for (some part of the) clone
controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
other 360 terminal controller line scanner trivia ... when the ascii/tty terminal line scanner arrived (for the CE to install in the controller) ... it was in a box from "heathkit".
The clone controller business is claimed to be the major motivation
for the IBM Future System effort in the 70s (make the interface so
complex that clone makers couldn't keep up). From the law of
unintended consequences: FS was completely different from 370 and was
going to completely replace it and internal politics was shutting down
the 370 projects ... the lack of new IBM 370 offerings is claimed to
give the clone 370 processor makers their market foothold (FS as
countermeasure to clone controllers becomes responsible for rise of
clone processors). Some FS details
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM selectric terminals ran with tilt-rotate codes (so the computer needed to translate from EBCDIC to tilt-rotate code) ... you got a table of which characters were located where on the ball. Whatever characters you think you have in the computer have to be translated into the tilt-rotate code for that character's position on the ball. There were standard balls, making a lot of tilt-rotate codes common. But it was possible to have things like an APL-ball with lots of special characters, needing to know where they were on the ball and the corresponding tilt-rotate codes (changing the selectric ball could require having the corresponding table of character positions on the ball). So there really wasn't an EBCDIC code for selectric terminals ... it was "tilt-rotate" code.
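A minimal sketch of the translate-table idea (Python; the table values are made up ... real tilt/rotate codes depended on the actual ball layout):

# The code sent down the line depends on where the character sits on the
# currently mounted ball, so each ball needs its own translate table.
STANDARD_BALL = {0xC1: 0x21, 0xC2: 0x22, 0xC3: 0x23}   # EBCDIC A,B,C (values invented)
APL_BALL      = {0xC1: 0x35, 0xC2: 0x36, 0xC3: 0x37}   # same chars, different positions

def to_tilt_rotate(ebcdic_bytes, ball_table):
    # what's "in the computer" is EBCDIC; what goes to the terminal is the
    # tilt/rotate code for wherever the glyph lives on this ball
    return bytes(ball_table[b] for b in ebcdic_bytes)

msg = bytes([0xC1, 0xC2, 0xC3])                        # "ABC" in EBCDIC
print(to_tilt_rotate(msg, STANDARD_BALL).hex())        # 212223
print(to_tilt_rotate(msg, APL_BALL).hex())             # 353637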
Teletype ASCII terminals did run with direct ASCII code (on ASCII
computers, just send the same data directly to terminal ... not
required to change into tilt-rotate code). "AND" the greatest computer
"goof": 360 was supposed to be ASCII ... from the (IBM) father of ASCII
(page gone 404, now at the wayback machine):
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Who Goofed?
The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM
Vice President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually
will be done. I've mentioned this fiasco elsewhere. Here are some
direct extracts:
... snip ...
i.e. the ascii unit record equipment wasn't ready for the 360 announce, so they had to rely (supposedly temporarily?) on BCD machines (adapted for EBCDIC)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Arbitration Antics: Warren, Porter, Press Regulator to Explain Yet Another Way Wells Fargo Found to Game the System Date: 13 Feb 2022 Blog: Facebook
Arbitration Antics: Warren, Porter, Press Regulator to Explain Yet Another Way Wells Fargo Found to Game the System
fraud, risk, exploits, threats, vulnerability posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
other past posts mentioning Wells Fargo
https://www.garlic.com/~lynn/2019d.html#64 How the Supreme Court Is Rebranding Corruption
https://www.garlic.com/~lynn/2019b.html#99 Student Loan Forgiveness Program Offers False Hope, Rejects 99% of Applications
https://www.garlic.com/~lynn/2019b.html#9 England: South Sea Bubble - The Sharp Mind of John Blunt
https://www.garlic.com/~lynn/2018d.html#60 Dirty Money, Shiny Architecture
https://www.garlic.com/~lynn/2018d.html#50 OCC Covering Up for Wells Fargo Type Abuses at Other Banks
https://www.garlic.com/~lynn/2017j.html#64 Wages and Productivity
https://www.garlic.com/~lynn/2017j.html#59 Wall Street Wants to Kill the Agency Protecting Americans From Financial Scams
https://www.garlic.com/~lynn/2017e.html#93 Ransomware on Mainframe application ?
https://www.garlic.com/~lynn/2017b.html#52 when to get out???
https://www.garlic.com/~lynn/2017b.html#6 OT: Trump Moves to Roll Back Obama-Era Financial Regulations
https://www.garlic.com/~lynn/2017b.html#0 Trump to sign cyber security order
https://www.garlic.com/~lynn/2016f.html#54 U.S. Big Banks: A Culture of Crime
https://www.garlic.com/~lynn/2016f.html#52 U.S. Big Banks: A Culture of Crime
https://www.garlic.com/~lynn/2016c.html#92 Goldman and Wells Fargo FINALLY Admit They Committed Fraud
https://www.garlic.com/~lynn/2016c.html#86 Wells Fargo "Admits Deceiving" U.S. Government, Pays Record $1.2 Billion Settlement
https://www.garlic.com/~lynn/2016c.html#85 Wells Fargo "Admits Deceiving" U.S. Government, Pays Record $1.2 Billion Settlement
https://www.garlic.com/~lynn/2016c.html#84 Wells Fargo "Admits Deceiving" U.S. Government, Pays Record $1.2 Billion Settlement
https://www.garlic.com/~lynn/2015h.html#25 Hillary Clinton's Glass-Steagall
https://www.garlic.com/~lynn/2015g.html#70 AIG freezes defined-benefit pension plan
https://www.garlic.com/~lynn/2015f.html#64 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015.html#90 NY Judge Slams Wells Fargo For Forging Documents... And Why Nothing Will Change
https://www.garlic.com/~lynn/2014d.html#78 Wells Fargo made up on-demand foreclosure papers plan: court filing charges
https://www.garlic.com/~lynn/2014d.html#64 Wells Fargo made up on-demand foreclosure papers plan: court filing charges
https://www.garlic.com/~lynn/2014d.html#47 Stolen F-35 Secrets Now Showing Up in China's Stealth Fighter
https://www.garlic.com/~lynn/2014d.html#46 Wells Fargo made up on-demand foreclosure papers plan: court filing charges
https://www.garlic.com/~lynn/2014c.html#57 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013l.html#38 OT: NYT article--the rich get richer
https://www.garlic.com/~lynn/2013l.html#37 Money Laundering Exposed As A Key Component Of The Housing Bubble's "All Cash" Bid
https://www.garlic.com/~lynn/2013h.html#36 CLECs, Barbara, and the Phone Geek
https://www.garlic.com/~lynn/2013e.html#42 More Whistleblower Leaks on Foreclosure Settlement Show Both Suppression of Evidence and Gross Incompetence
https://www.garlic.com/~lynn/2013d.html#63 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#58 More Whistleblower Leaks on Foreclosure Settlement Show Both Suppression of Evidence and Gross Incompetence
https://www.garlic.com/~lynn/2013c.html#43 More Whistleblower Leaks on Foreclosure Settlement Show Both Suppression of Evidence and Gross Incompetence
https://www.garlic.com/~lynn/2013.html#50 How to Cut Megabanks Down to Size
https://www.garlic.com/~lynn/2012p.html#49 Regulator Tells Banks to Share Cyber Attack Information
https://www.garlic.com/~lynn/2012n.html#55 U.S. Sues Wells Fargo, Accusing It of Lying About Mortgages
https://www.garlic.com/~lynn/2012n.html#12 Why Auditors Fail To Detect Frauds?
https://www.garlic.com/~lynn/2012l.html#85 Singer Cartons of Punch Cards
https://www.garlic.com/~lynn/2012l.html#48 The Payoff: Why Wall Street Always Wins
https://www.garlic.com/~lynn/2012e.html#37 The $30 billion Social Security hack
https://www.garlic.com/~lynn/2011n.html#49 The men who crashed the world
https://www.garlic.com/~lynn/2011k.html#56 50th anniversary of BASIC, COBOL?
https://www.garlic.com/~lynn/2011g.html#30 Bank email archives thrown open in financial crash report
https://www.garlic.com/~lynn/2011f.html#52 Are Americans serious about dealing with money laundering and the drug cartels?
https://www.garlic.com/~lynn/2011b.html#42 Productivity And Bubbles
https://www.garlic.com/~lynn/2011.html#50 What do you think about fraud prevention in the governments?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 14 Feb 2022 Blog: Facebook
In 1980, STL was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" and found the human factors unacceptable. I get con'ed into doing channel extender support so they could place channel-attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff). A side effect: the (really slow, high channel busy) 3270 controllers had been spread around all the mainframe (disk) channels; moving the channel-attached 3270 controllers offsite and replacing them with a really high-speed box (for all the 3270 activity) drastically cut the 3270-related channel busy (for the same amount of traffic) ... allowing more disk throughput and improving overall system throughput by 10-15%.
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
IBM channels were 3mbyte/sec half duplex. Mid-80s LANL (national lab)
was behind standardization of the Cray 100mbyte/sec channel which
becomes HiPPI
https://en.wikipedia.org/wiki/HIPPI
In 1988, I was asked to help LLNL (national lab) standardize some
serial stuff they were playing with, which quickly becomes the fibre channel
standard (including some stuff I had done in 1980) ... started out
1gbit/sec, full-duplex, 2gbit/sec aggregate (200mbyte/sec).
https://en.wikipedia.org/wiki/Fibre_Channel
For a time HIPPI group was doing a serial HIPPI in competition with
FCS (I have bunch of meeting notes from both efforts).
The POK group finally get their serial stuff released in 1990 with ES/9000 as ESCON, when it is already obsolete (17mbytes/sec; FCS aggregate 200mbytes/sec). Then some POK engineers become involved in FCS and define an enormously heavy-weight protocol that radically reduces the native throughput; it is eventually released as FICON.
The most recent published "peak I/O" benchmark I can find is for a max-configured z196 getting 2M IOPS with 104 FICON (running over 104 FCS) ... using emulated CKD disks on industry standard fixed-block disks (no real CKD disks have been made for decades). About the same time there was an FCS announced for E5-2600 blades (standard in cloud megadatacenters at the time) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS) using industry standard disks.
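Working the per-channel arithmetic from those published figures (a small Python calculation; the division is mine):

z196_iops, ficon_channels = 2_000_000, 104
fcs_iops = 1_000_000                  # claimed for one E5-2600 blade FCS

per_ficon = z196_iops / ficon_channels
print(f"{per_ficon:,.0f} IOPS per FICON")              # ~19,231
print(f"one FCS = {fcs_iops / per_ficon:.0f} FICON")   # ~52
print(f"two FCS = {2 * fcs_iops:,} IOPS vs {z196_iops:,} for 104 FICON")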
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
Other trivia: the last product I did at IBM was HA/CMP, working with
national labs on technical/scientific cluster scale-up (including
porting the LLNL Cray filesystem to HA/CMP) and with RDBMS vendors on
commercial cluster scale-up. HA/CMP was also doing configurations with
Hursley 9333 ... full-duplex 80mbit/sec serial copper. I wanted 9333
to evolve into (slower-speed) interoperable FCS ... but instead it
evolves into 160mbit/sec (later faster) incompatible SSA.
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
old post mentioning cluster scale-up meeting Jan1992 in (oracle ceo)
Ellison conference room (16-way loosely-coupled/cluster by my92,
128-way loosely-coupled/cluster by ye92)
https://www.garlic.com/~lynn/95.html#13
within a few weeks of the Ellison meeting, cluster scale-up is
transferred, announced as IBM supercomputer (for scientific/technical
*ONLY*) and we are told we couldn't work on anything with more than
four processors. We leave IBM a few months later.
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
3880/3090 trivia: originally, the 3090 had its number of channels configured assuming the 3880 controller would have channel busy similar to the 3830 controller, but with 3mbyte/sec channel throughput. However, while the 3880 had special hardware to handle data transfer, the 3880 microprocessor was really slow ... the combination of the IBM half-duplex channel protocol and the slow 3880 microprocessor drastically drove up channel busy. As a result they had to drastically increase the number of channels (to offset the drastic increase in channel busy), which had the side-effect of needing an extra TCM (the 3090 joke was that they were going to bill the 3880 group for the extra 3090 manufacturing cost). The marketing people eventually respin the significant increase in 3090 channels as an extraordinary I/O machine (rather than just offsetting the extraordinary increase in channel busy).
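A toy utilization calculation of the effect (Python; all numbers are assumed, purely to show the relationship):

# If the slow 3880 microprocessor stretches per-I/O channel-busy time,
# channel count must grow proportionally to sustain the same I/O rate.
target_ios_per_sec = 2000              # assumed system-wide I/O rate
busy_3830_like = 0.5e-3                # assumed per-I/O channel busy, seconds
busy_3880 = 1.5e-3                     # assumed: 3x busier per I/O

def channels_needed(rate, busy_per_io, max_util=0.30):
    # keep each channel under max_util to avoid queueing delays
    return rate * busy_per_io / max_util

print(channels_needed(target_ios_per_sec, busy_3830_like))   # ~3.3 channels
print(channels_needed(target_ios_per_sec, busy_3880))        # ~10 -- hence extra channels (and TCM)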
posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
some 3880/jib-prime microprocessor posts
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021.html#60 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2017g.html#61 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016g.html#76 IBM disk capacity
https://www.garlic.com/~lynn/2016e.html#56 IBM 1401 vs. 360/30 emulation?
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016b.html#81 Asynchronous Interrupts
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2013n.html#69 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#57 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2013i.html#0 By Any Other Name
https://www.garlic.com/~lynn/2013h.html#86 By Any Other Name
https://www.garlic.com/~lynn/2013d.html#16 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#17 What is a Mainframe?
https://www.garlic.com/~lynn/2012o.html#28 IBM mainframe evolves to serve the digital world
https://www.garlic.com/~lynn/2012e.html#27 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012d.html#75 megabytes per second
https://www.garlic.com/~lynn/2012d.html#28 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2011p.html#128 Start Interpretive Execution
https://www.garlic.com/~lynn/2011j.html#54 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011.html#37 CKD DASD
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010h.html#62 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 14 Feb 2022 Blog: Facebook
re:
Starting in the early 80s, I had the HSDT project, T1 and faster computer links (including the 1980 STL channel-extender mentioned in the above post, using NSC hardware). NCAR had a supercomputer hierarchical filesystem controlled by an IBM system with IBM disks. Supercomputers would send a request to the IBM system; the IBM system would make sure the data was staged to IBM disks, download the appropriate disk channel program to an A515 channel emulator ... and return the "handle" for the channel program to the supercomputer to execute directly. In the move to HIPPI and HIPPI switch ... there were provisions to do similar "3rd party transfers" via HIPPI in support of the NCAR filesystem ("3rd party transfers" were also added to FCS switches). I had become the IBM expert on NSC gear and the branch would periodically call me to help with issues at NCAR.
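A skeletal sketch of that third-party-transfer flow (Python; all function and message names are hypothetical):

import uuid

staged = {}                                  # handle -> channel program

def stage_to_ibm_disk(path):
    print(f"staging {path} from archive to IBM disk")

def build_channel_program(path):
    return f"<CCWs reading {path}>"          # downloaded to the channel emulator

def request_file(path):
    """Runs on the IBM control system."""
    stage_to_ibm_disk(path)                  # ensure the data is on IBM disks
    handle = str(uuid.uuid4())
    staged[handle] = build_channel_program(path)
    return handle                            # small reply: just the handle

def supercomputer_read(handle):
    """Runs on the supercomputer: the bulk data moves directly."""
    print(f"executing {staged[handle]} via the channel emulator")

h = request_file("/ncar/model/run42.dat")
supercomputer_read(h)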
Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and finally an RFP is released. Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP (in part based on what we already had running), the NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid). The winning bid doesn't even install the T1 links called for ... they are 440kbit/sec links ... but apparently to make it look like it meets the requirements, they install telco multiplexors with T1 trunks (running multiple links/trunk). We periodically ridiculed them, asking why not call it a T5 network (because some of those T1 trunks would in turn be multiplexed over T3 or even T5 trunks). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
... while doing high-speed links ... was also working on various processor clusters with national labs (had been doing it on&off dating back to getting con'ed into doing a CDC6600 benchmark on an engineering 4341 for a national lab that was looking at getting 70 for a compute farm, sort of the leading edge of the coming cluster supercomputing tsunami).
periodically reposted: a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference in the late 80s, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters with its corporate strategic ownership of everything that crossed the datacenter wall, and was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. The disk division was seeing a drop in disk sales with customers moving to more distributed-computing friendly platforms. The disk division had come up with a number of solutions, but the communication group (with their datacenter stranglehold) would veto them.
dumb terminal emulation posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
As a workaround to the internal politics and the communication group, the GPD/ADstar VP of software was investing in distributed computing startups (that would use IBM disks) and would periodically ask us to drop by his investments to offer assistance. One was an NCAR spinoff, MESA Archival, porting the NCAR supercomputer filesystem to RS/6000 and HA/CMP.
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
other HSDT trivia: also had T1 satellite link between Los Gatos and
Clementi
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (this was different from, and not related to, Kingston's "supercomputer" effort). His lab had a boatload of FPS boxes (with 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 14 Feb 2022 Blog: Facebook
re:
other cray/4341 trivia: as part of the communication group fiercely fighting off client/server and distributed computing ... they were fighting hard to prevent the release of mainframe tcp/ip support. When they lost that battle, they changed their tactic: since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What they released got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then added support for RFC1044 and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (nearly 500 times improvement in bytes moved per instruction executed)
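A back-of-envelope version of that bytes-per-instruction comparison (only the throughput figures come from the post; the MIPS ratings and CPU fractions below are rough assumptions for illustration):

def bytes_per_instruction(bytes_per_sec, cpu_fraction, mips):
    # bytes moved per machine instruction spent on TCP/IP
    return bytes_per_sec / (cpu_fraction * mips * 1_000_000)

# base release: 44kbytes/sec consuming nearly a whole 3090 CPU
base = bytes_per_instruction(44_000, 1.0, 15.0)        # ~15 MIPS assumed
# RFC1044 path: ~channel-speed transfer on a modest slice of a 4341
rfc1044 = bytes_per_instruction(1_000_000, 0.25, 1.2)  # both numbers assumed
print(f"~{rfc1044 / base:.0f}x more bytes per instruction")

With these rough assumptions the ratio lands around three orders of magnitude; the post's ~500x figure is the measured version of the same effect.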
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
disk&controller trivia: when I transfer to San Jose Research in the 70s, I get to wander around most datacenters (IBM and non-IBM) in silicon valley ... including the disk engineering (bldg14) and disk product test (bldg15) across the street. They were doing prescheduled, stand-alone, 7x24 mainframe testing. They mentioned that they had recently tried MVS, but it had 15min mean-time-between-failure in that environment. I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand concurrent testing ... greatly improving productivity. Downside was when they had a problem, they 1st tried to blame it on my software ... and I had to spend an increasing amount of time playing disk engineer and diagnosing their problems.
I then do an (internal-only) research report describing all the work ... and also happen to mention the MVS 15min MTBF ... bringing the wrath of the MVS group down on my head (offline, I was told that they tried to have me separated from IBM; when that didn't work, they tried ways to make my career unpleasant).
playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 14 Feb 2022 Blog: Facebook
re:
After leaving IBM, I was asked to come into the largest airline res system to look at the impossible things they couldn't do. They start with "routes", which represented about 25% of the total mainframe workload. They give me a tape with the full OAG (all scheduled commercial flt segments in the world) and I go away for two months. I come back with rewritten ROUTES running on RS/6000 ... which they tested for doing all the impossible things. I had started out about 100 times faster than the mainframe version ... and then doing all the rest of the impossible things slowed it down to only about ten times faster. Basically ten rack-mount RS6000/990s could handle all route requests for all commercial airlines in the world (including doing all the impossible things). A decade later, cellphone processors had as much compute power as those ten 990s.
Note after the demo ... they started wringing their hands ... eventually saying they hadn't really wanted me to do all the impossible things ... they just wanted to be able to tell the parent company's board that I was working on it (in a prior life, I had known one of the board members at IBM). Part of doing the impossible things had automated a bunch of stuff that was being done manually by some 800 people ... who would then be made obsolete. They wouldn't let me at "fares", which represented about 40% of the mainframe workload.
some past posts mentioning ROUTES:
https://www.garlic.com/~lynn/2021f.html#8 Air Traffic System
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2017k.html#63 SABRE after the 7090
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 14 Feb 2022 Blog: Facebook
re:
IBM Los Gatos VLSI lab was trying to help the IBM Burlington VLSI center. Problem was MVT->VS2/SVS->VS2/MVS ... In the transition to MVS, they gave every application its own 16mbyte virtual address space. However, OS/360 was a heavily pointer-passing API paradigm ... and so an 8mbyte image of the MVS kernel was mapped into every application 16mbyte virtual address space (leaving just 8mbytes for applications). Then, because subsystems were also mapped into their own (separate) 16mbyte virtual address spaces ... they had to come up with the Common Segment Area (1mbyte CSA) for passing parameters (where the address is the same for both applications and subsystems); CSA takes 1mbyte, leaving only 7mbytes for applications. However, the CSA requirement is somewhat proportional to concurrent applications and subsystems, and by 3033 time, many customer CSAs (renamed Common System Area) were 5-6mbytes (and threatening to become eight), leaving only 2-3mbytes for application programs (threatening to become zero).
IBM Burlington had custom MVS systems (168-3 & 3033) that only had a one megabyte CSA, dedicated to a highly used 7mbyte Fortran VLSI application (although there was a constant problem with any enhancement pushing the application over the 7mbytes available). Los Gatos did some experimentation porting to VM370/CMS (allowing almost the full 16mbytes for application use), but there was a problem with the CMS 64kbyte OS simulation implementation not fully supporting the Burlington VLSI Fortran application. Los Gatos then found that with 12kbytes more of OS simulation code, they could easily run the Burlington VLSI Fortran application and more than double the space available for its runtime execution (easily handling all the enhancements deferred because of lack of execution space under MVS).
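The address-space arithmetic above in one small sketch (sizes from the post; the loop just replays the cases mentioned):

TOTAL, KERNEL = 16, 8       # 16mbyte virtual address space, 8mbyte MVS kernel image
for csa in (1, 5, 6, 8):    # CSA growth from the post (Burlington held it at 1)
    app = TOTAL - KERNEL - csa
    print(f"CSA={csa}mbyte -> {app}mbytes left for the application")
# CSA=1 -> 7 (just enough for Burlington's 7mbyte Fortran application);
# CSA=8 -> 0 (the "threatening to become zero" case)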
recent posts mentioning MVS common segment/system area:
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019b.html#25 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2015b.html#60 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2015b.html#46 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2015b.html#40 OS/360
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#78 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014i.html#86 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2014d.html#62 Difference between MVS and z / OS systems
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CP-67 Date: 15 Feb 2022 Blog: Facebook
CP-67
some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr Project MAC to do MULTICS (operating system
written in PLI).
https://en.wikipedia.org/wiki/Multics
Others went to the IBM science center on 4th flr to do virtual machine
(CP40/CMS & CP67/CMS), internal network, lots of performance tools,
etc. CTSS RUNOFF was redone for CMS as SCRIPT. In 1969, GML was
invented at CSC and GML tag processing added to SCRIPT (a decade later
GML morphs into ISO standard SGML, and after another decade morphs
into HTML at CERN)
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML posts
https://www.garlic.com/~lynn/submain.html#sgml
trivia: CSC wanted a model 50 for the virtual memory hardware changes ... but all the spare 50s were going to the FAA ATC project ... so they had to settle for a 360/40 for the hardware changes ... and created CP40/CMS. CP40/CMS morphs into CP67/CMS when the 360/67 becomes available, standard with virtual memory. Lots more history (including Les' CP40)
https://www.leeandmelindavarian.com/Melinda#VMHist
quote here
https://www.leeandmelindavarian.com/Melinda/25paper.pdf
Since the early time-sharing experiments used base and limit registers
for relocation, they had to roll in and roll out entire programs when
switching users....Virtual memory, with its paging technique, was
expected to reduce significantly the time spent waiting for an
exchange of user programs.
What was most significant was that the commitment to virtual memory
was backed with no successful experience. A system of that period that
had implemented virtual memory was the Ferranti Atlas computer, and
that was known not to be working well. What was frightening is that
nobody who was setting his virtual memory direction at IBM knew why
Atlas didn't work.(23)
... from "CP/40 -- The Origin of VM/370"
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
What was most significant to the CP/40 team was the commitment to
virtual memory was backed with no successful experience. A system of
that period that had implemented virtual memory was the Ferranti Atlas
computer and that was known to not be working well. What was
frightening is that nobody who was setting this virtual memory
direction at IBM knew why Atlas didn't work.
.... snip ...
CSC delivered CP67 to the univ. Jan68 (3rd installation, after CSC itself and Lincoln Labs); it had no page thrashing controls and a primitive page replacement algorithm. I added dynamic adaptive page thrashing controls, a highly efficient page replacement algorithm, dynamic adaptive resource management and scheduling ... significantly rewrote lots of code to reduce pathlengths, redid DASD I/O for seek and rotational optimization ... and a few other things.
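For flavor, a minimal sketch of a "clock"-style page replacement algorithm of the general sort involved (illustrative only, assuming simple hardware reference bits; it is not the actual CP67 code):

class Clock:
    def __init__(self, nframes):
        self.referenced = [False] * nframes   # hardware-set reference bits
        self.hand = 0                         # current clock-hand position

    def touch(self, frame):                   # page referenced since last sweep
        self.referenced[frame] = True

    def select_victim(self):
        # sweep the frames: referenced pages get a second chance (bit reset),
        # the first unreferenced frame found is stolen for replacement
        while True:
            frame, self.hand = self.hand, (self.hand + 1) % len(self.referenced)
            if self.referenced[frame]:
                self.referenced[frame] = False
            else:
                return frame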
recent post about some of the path length work
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
part of a 1968 SHARE meeting presentation on the pathlength work
https://www.garlic.com/~lynn/94.html#18
also mentions old 2011 post
https://www.garlic.com/~lynn/2011c.html#72
about re-giving an Oct86 presentation that I made at a SEAS meeting on
history of VM370 performance
https://www.garlic.com/~lynn/hill0316g.pdf
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page management posts
https://www.garlic.com/~lynn/subtopic.html#clock
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: To Be Or To Do Date: 16 Feb 2022 Blog: Facebook
Boyd Quote:
Boyd posts & URLs
https://www.garlic.com/~lynn/subboyd.html
past Rochefort reference
https://www.garlic.com/~lynn/2017i.html#86 WW II cryptography
military-industrial complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
past posts with "forgive you for being right" reference
https://www.garlic.com/~lynn/2021d.html#80 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021.html#83 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2015d.html#14 3033 & 3081 question
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013f.html#78 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012d.html#40 Strategy subsumes culture
https://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
https://www.garlic.com/~lynn/2008m.html#30 Taxes
https://www.garlic.com/~lynn/2003i.html#71 Offshore IT
https://www.garlic.com/~lynn/2002k.html#61 arrogance metrics (Benoits) was: general networking
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev. Date: 16 Feb 2022 Blog: Facebook
IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev. z/OS VMs coming sometime in the second half of 2022
The original big internal IBM virtual guest use ("cloud") was for HONE ("hands-on network environment"). SE training used to include a sort of journeyman program as part of a large group at the customer account. After the 23Jun1969 "unbundling" announcement (charging for SE time, software, maint, etc), they couldn't figure out how not to charge for trainee SEs at the customer account. Thus was born HONE: branch office SEs working online with guest operating systems at HONE CP67 datacenters around the country. The Science Center had also ported APL\360 to CMS for CMS\APL, and HONE started making APL-based sales&marketing support applications available on HONE. Eventually the sales&marketing applications came to dominate all HONE activity and the SE guest operating system use withered away. trivia: after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... and HONE was a long-time customer.
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr Project MAC to do MULTICS (operating system
written in PLI).
https://en.wikipedia.org/wiki/Multics
Others went to the IBM science center on 4th flr to do virtual
machine (CP40/CMS & CP67/CMS), internal network, lots of
performance tools, etc. CTSS RUNOFF was redone for CMS as SCRIPT. In
1969, GML was invented at CSC and GML tag processing added to SCRIPT
(a decade later GML morphs into ISO standard SGML, and after another
decade morphs into HTML at CERN)
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML posts
https://www.garlic.com/~lynn/submain.html#sgml
trivia: CSC wanted a model 50 for the virtual memory hardware changes ... but all the spare 50s were going to the FAA ATC project ... so they had to settle for a 360/40 for the hardware changes ... and created CP40/CMS. CP40/CMS morphs into CP67/CMS when the 360/67 becomes available, standard with virtual memory. Lots more history (including Les' CP40)
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/25paper.pdf
... from "CP/40 -- The Origin of VM/370"
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
What was most significant to the CP/40 team was the commitment to
virtual memory was backed with no successful experience. A system of
that period that had implemented virtual memory was the Ferranti Atlas
computer and that was known to not be working well. What was
frightening is that nobody who was setting this virtual memory
direction at IBM knew why Atlas didn't work.
The experiments run on the CP/40 system yielded significant results in
area of virtual memory. First we discovered the phenomenon currently
known as thrashing. I first reported it to an internal IBM conference
on Storage Hierarchy in December 1966 (ref. 3).
.... snip ...
CSC delivered CP67 to the univ. Jan68 (3rd installation, after CSC itself and Lincoln Labs); it had no (real) page thrashing controls and a primitive page replacement algorithm (and I was asked to be part of the IBM announce at the spring SHARE meeting in Houston). I added dynamic adaptive page thrashing controls, a highly efficient page replacement algorithm, dynamic adaptive resource management and scheduling ... significantly rewrote lots of code to reduce pathlengths, redid DASD I/O for seek and rotational optimization ... and a few other things. Started with CP67 pathlengths running OS/360 in virtual machines; part of an early SHARE presentation on both the OS/360 and CP67 optimization work:
https://www.garlic.com/~lynn/94.html#18
Oct86 SEAS presentation on history of VM370 performance (presented
again at spring2011, washdc "hill gang" user group meeting):
https://www.garlic.com/~lynn/hill0316g.pdf
CSC (and the 60s online CP67 commercial spinoffs of the science center) did a lot of work to make systems available 7x24 and reduce costs and overhead; ondemand, offshift and dark-room operation, operator-less, allowing the CPU meter to stop when the system was idle (back when even internal machines were charged lease/rent based on the CPU meter ... trivia: long after IBM had switched to selling machines, MVS still had a 400ms timer request that made sure the CPU meter never stopped, even for an otherwise idle system).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
Equivalent cloud today: they have so radically reduced computer system costs that the large cloud megadatacenters (with half a million or more systems) found that increasingly power&cooling were the major expense ... they applied lots of pressure on server chip makers to greatly improve chip power efficiency as well as allow power to drop to zero when idle but instantly on (for ondemand requirements). Also enormous automation: a large megadatacenter with half a million or more systems runs with a staff of 80-120 people (rather than staff/system, it's number of systems/staff).
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Reimagining Capitalism: Major Philanthropies Launch Effort at Leading Academic Institutions Date: 16 Feb 2022 Blog: Facebook
Reimagining Capitalism: Major Philanthropies Launch Effort at Leading Academic Institutions
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism
Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards
https://www.amazon.com/Economists-Powerful-Convenient-Distorted-Economics-ebook/dp/B01B4X4KOS/
loc72-74:
"Only through having been caught so blatantly with their noses in the
troughs (e.g. the 2011 Academy Award -- winning documentary Inside
Job) has the American Economic Association finally been forced to
adopt an ethical code, and that code is weak and incomplete compared
with other disciplines."
loc957-62:
The AEA was pushed into action by a damning research report into the
systematic concealment of conflicts of interest by top financial
economists and by a letter from three hundred economists who urged the
association to come up with a code of ethics. Epstein and
Carrick-Hagenbarth (2010) have shown that many highly influential
financial economists in the US hold roles in the private financial
sector, from serving on boards to owning the respective
companies. Many of these have written on financial regulation in the
media or in scholarly papers. Very rarely have they disclosed their
affiliations to the financial industry in their writing or in their
testimony in front of Congress, thus concealing a potential conflict
of interest.
... snip ...
"Inside Job" references how leading economists were captured similar
to the capture of the regulatory agencies.
https://en.wikipedia.org/wiki/Inside_Job_(2010_film)
Age of Greed: The Triumph of Finance and the Decline of America, 1970
to the Present
https://www.amazon.com/Age-Greed-Triumph-Finance-ebook/dp/B004DEPF6I/
regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
West from Appomattox: The Reconstruction of America after the Civil War
https://www.amazon.com/West-Appomattox-Reconstruction-America-after-ebook/dp/B0015R3Q3A/
loc2593-97:
Before the mid-nineteenth century, a company interested in
incorporating had to prove to a state legislature that it was
performing a function that directly promoted the public good. During
the Civil War, this definition shifted. A corporation still had to
perform a public function but was no longer bound by the moral
imperatives imposed on early corporations. This enabled more and more
businesses to incorporate. Incorporation meant that they could sell
stock, which represented a share of the business, on the open market
to raise money.
... snip ...
False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I Origins of the Corporation. Although the corporate structure dates
back as far as the Greek and Roman Empires, characteristics of the
modern corporation began to appear in England in the mid-thirteenth
century.[4] "Merchant guilds" were loose organizations of merchants
"governed through a council somewhat akin to a board of directors,"
and organized to "achieve a common purpose"[5] that was public in
nature. Indeed, merchant guilds registered with the state and were
approved only if they were "serving national purposes."[6]
... snip ...
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Did Ben Bernanke Implement QE before the 2008 Financial Crisis? Date: 16 Feb 2022 Blog: Facebook
Did Ben Bernanke Implement QE before the 2008 Financial Crisis?
Money and Government: The Past and Future of Economics
https://www.amazon.com/Money-Government-Past-Future-Economics-ebook/dp/B07HYYXGHX/
pg256/loc4372-75:
The Fed was quickest off the mark. The need for large-scale QE was the
lesson Ben Bernanke drew from the Friedman and Schwartz story of the
Great Depression. Shortly before he became Chairman of the Federal
Reserve Board in 2006, Bernanke wrote: 'By allowing persistent
declines in the money supply and in the price level, the Federal
Reserve of the late 1920s and 1930s greatly destabilized the
U.S. economy.'
pg256/loc4378-80:
In its initial round of purchases (QE1), between November 2008 and
March 2010, the Fed bought $1.25 trillion of mortgage-backed
securities (MBS), $200 billion of agency debt (issued by the
government-sponsored agencies Fannie Mae and Freddie Mac) and $300
billion of long-term Treasury securities, totalling 12 per cent of the
US's 2009 GDP.
pg256/loc4380-84:
Its second round of purchases (QE2) - $600 billion of long-term
Treasury securities - ran between November 2010 and June 2011, and its
third round (QE3) started in September 2012 with monthly purchases of
agency mortgage-backed securities. 27 The programmes were wound up in
October 2014, by which point the Fed had accumulated an unprecedented
$4.5 trillion worth of assets, 28 equivalent to just over a quarter of
US GDP in 2014.
... snip ...
Supposedly TARP ($700B) was for bank bailout (buying offbook toxic assets), but the largest part went to AIG. AIG had been negotiating to pay off CDS "gambling bets" at 50cents on the dollar when SECTREAS stepped in and had AIG sign a document that they couldn't sue those making the bets (who had created CDO/MBS designed to fail and then took out CDS bets that they would fail) and to take TARP funds to pay off at face value. The largest recipient of TARP was AIG, and the largest recipient of the face-value payoffs was the firm formerly headed by SECTREAS.
The FED fought a long, hard legal battle to keep private that it was doing the real bailout behind the scenes (buying trillions in offbook toxic assets at 98cents on the dollar and providing tens of trillions in ZIRP funds). When the FED lost the legal battle, the FED chairman called a press conference and said that he had expected the TBTF to use ZIRP to help mainstreet, but when they didn't, he had no way to force them (but that didn't stop the ZIRP funds; the TBTF were using ZIRP to buy treasuries and pocketing the hundreds of billions/year on the spread).
TBTF TARP was just a side-show, and the TBTF could use a little of the ZIRP funds to repay TARP. Note the FED chairman supposedly was selected for being a depression-era scholar; the FED had tried something similar back then with the same results (so he should have had no expectation that the TBTF would do anything different this time). Account of $9T ZIRP ... eventually grew to $30T.
http://www.csmonitor.com/USA/2010/1201/Federal-Reserve-s-astounding-report-We-loaned-banks-trillions
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
fed chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
Too Big To Fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
(triple A rated) toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Did Ben Bernanke Implement QE before the 2008 Financial Crisis? Date: 17 Feb 2022 Blog: Facebook
re:
In Jan1999 I was asked to help try and stop the economic mess (we failed). I was told that some investment bankers had walked away "clean" from the S&L Crisis ... were then running Internet IPO Mills (invest a few million, hype, IPO for a few billion; they needed to fail to leave the field open for the next round of IPOs), and were predicted next to get into securitized loans&mortgages (2001-2008, they sold more than $27T into the bond market).
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
(triple-A rated) toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
On CNN, Fareed called out that political strife and conflict got much worse with speaker Gingrich. In Jan1999, after we were asked to help try and prevent the coming economic mess (we failed), one of the things we were told was that there has always been conflict between the two parties, but they could put their differences aside and come together to do things for the country. Gingrich weaponized the political process, everything came to be about party advantage (the other party had to lose even if it damaged the country), and the level of party conflict and strife got significantly worse.
past posts mentioning Gingrich
https://www.garlic.com/~lynn/2022.html#115 Newt Gingrich started us on the road to ruin. Now, he's back to finish the job
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021e.html#11 George W. Bush Can't Paint His Way Out of Hell
https://www.garlic.com/~lynn/2021d.html#8 A Discourse on Winning and Losing
https://www.garlic.com/~lynn/2021d.html#4 The GOP's Fake Controversy Over Colin Kahl Is Just the Beginning
https://www.garlic.com/~lynn/2021c.html#93 How 'Owning the Libs' Became the GOP's Core Belief
https://www.garlic.com/~lynn/2021c.html#51 In Biden's recovery plan, an overdue rebuke of trickle-down economics
https://www.garlic.com/~lynn/2021.html#29 How the Republican Party Went Feral. Democracy is now threatened by malevolent tribalism
https://www.garlic.com/~lynn/2019c.html#21 Mitch McConnell has done far more to destroy democratic norms than Donald Trump
https://www.garlic.com/~lynn/2019b.html#45 What is ALEC? 'The most effective organization' for conservatives, says Newt Gingrich
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets
https://www.garlic.com/~lynn/2018f.html#40 America's electoral system gives the Republicans advantages over Democrats
https://www.garlic.com/~lynn/2018f.html#28 America's electoral system gives the Republicans advantages over Democrats
https://www.garlic.com/~lynn/2011p.html#136 Gingrich urged yes vote on controversial Medicare bill
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: To Be Or To Do Date: 17 Feb 2022 Blog: Facebook
re:
... apparently epidemic in the Navy.
Lessons Not Learned: The U.S. Navy's Status Quo Culture
https://www.amazon.com/Lessons-Not-Learned-Status-Culture-ebook/dp/B00DKMWP2Q/
loc1289-90
fact is frequently played down in the United States that the British
and Canadians, in fact, conducted most of the ASW operations in the
Atlantic.
loc1293-95:
Canada started the war with a navy of only eleven ships, five of which
were minesweepers, and just 1,800 men in the regular Navy, but by the
end had accounted for the destruction or capture of nearly fifty
German submarines. The U.S. Navy began the war with over 337,000
personnel and more than 300 ships.
loc1303-4:
Despite this "less than overwhelming" performance, the U.S. Navy did
not seem to have a clue that the Canadians and British were far more
significant players in the Battle of the Atlantic.
loc1317-18:
Not surprisingly, much of what the American public was told about
U.S. Navy ASW performance in the Atlantic was outright fabrication,
said Regan.
loc1321-22:
Basically, the Navy department began issuing lies. They claimed
twenty-eight U-boats had been sunk off the east coast whereas the
correct figure was nil.
loc1322-24:
Regan summarized that "the Navy PR officers were not so easily
defeated as their anti-submarine operation," 76 in what amounted to a
vast spin campaign to protect negligent senior admirals from public
disgrace and possible dismissal.
... snip ...
military-industrial complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
posts mentioning "Lessons Not Learned"
https://www.garlic.com/~lynn/2017f.html#17 5 Naval Battles That Changed History Forever
https://www.garlic.com/~lynn/2015.html#31 channel islands, definitely not the location of LEO
https://www.garlic.com/~lynn/2014i.html#43 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014h.html#18 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014h.html#3 The Decline and Fall of IBM
https://www.garlic.com/~lynn/2014h.html#1 Lessons Not Learned: The U.S. Navy's Status Quo Culture
https://www.garlic.com/~lynn/2014b.html#90 Can America Win Wars?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Career Date: 17 Feb 2022 Blog: Facebook
Within a year after taking a two-credit-hour intro to fortran/computers ... the univ hires me fulltime to be responsible for OS/360 (it had been sold a 360/67 for tss/360 to replace 709/1401 ... but it was running as a 360/65 instead). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent group to better monetize the investment, including offering services to non-Boeing entities). I thot the Renton datacenter possibly largest in the world, a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. There were politics between the director of Renton and the CFO ... who only had a 360/30 up at boeing field for payroll (although they enlarged the machine room and put in a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join the IBM cambridge science center (which outbid boeing, commercial spinoffs of the science center, and others). I got to continue going to user group meetings and to wander around both IBM and customer datacenters (one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters).
history of VM Performance posts
https://www.garlic.com/~lynn/2022b.html#22 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2022.html#93 HSDT Pitches
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#82 IBM Internal network
https://www.garlic.com/~lynn/2021g.html#46 6-10Oct1986 SEAS
https://www.garlic.com/~lynn/2021e.html#65 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011c.html#72 A History of VM Performance
The director of one of the largest financial datacenters on the east coast liked me to drop by and talk technology. Then their branch manager horribly offended the customer and, in retaliation, they ordered an Amdahl system (a lonely Amdahl machine in a vast sea of blue). IBM then asked me to go spend a year onsite at the customer (to help obfuscate why the customer was ordering the Amdahl machine). I talked it over with the customer and then declined IBM's offer. I was then told that if I didn't do it, I could forget having an IBM career, promotions or raises ... the branch manager was a good sailing buddy of IBM's CEO. This was when Amdahl hadn't yet broken into the true-blue commercial market (this would have been the first) ... it had been selling just into the technical, scientific and university markets.
More than a decade after joining IBM, I was at San Jose Research and submitted an "Open Door" that I was vastly underpaid, with documentation. I got back a written reply from the head of HR ... that said that after detailed examination of my complete career, I was being paid exactly what I was supposed to be. I took the original and the reply, added a cover ... pointing out that I was being asked to interview coming graduates to work under my technical direction in a new group ... who were being offered a starting salary 30% more than I was currently making. I never got a reply, but within a few weeks, I got a 30% raise (putting me on a level playing field with what was being offered to the people I was interviewing). One of many times co-workers reminded me that, in IBM, "Business Ethics is an Oxymoron".
IBM "business ethics oxymoron" posts
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021c.html#42 IBM Suggestion Program
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021.html#83 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
trivia: In the early 80s, I'm introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was that he had been very vocal that the electronics across the trail wouldn't work ... possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing). One of his biographies claims "spook base" was a $2.5B "windfall" for IBM (ten times Renton). other trivia: in 89/90, the commandant of the Marine Corps leverages Boyd for a Corps makeover (at a time when IBM was desperately in need of a makeover). Boyd passed in 1997, but there have continued to be Boyd meetings at Marine Corps Univ. Spook base ref gone 404, but lives on at the wayback machine
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White
a few posts "spook base" $2.5B windfall for IBM
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021b.html#19 IBM Recruiting
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#29 Online Computer Conferencing
https://www.garlic.com/~lynn/2019e.html#138 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#77 Collins radio 1956
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019.html#54 IBM bureaucracy
https://www.garlic.com/~lynn/2018e.html#29 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018d.html#84 Management Training
https://www.garlic.com/~lynn/2018d.html#0 The Road Not Taken: Edward Lansdale and the American Tragedy in Vietnam
https://www.garlic.com/~lynn/2018b.html#28 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2018.html#39 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2017j.html#104 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017g.html#60 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017g.html#11 Mainframe Networking problems
https://www.garlic.com/~lynn/2017d.html#90 Old hardware
https://www.garlic.com/~lynn/2017d.html#14 Perry Mason TV show--bugs with micro-electronics
https://www.garlic.com/~lynn/2016f.html#42 Computers
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Online Date: 18 Feb 2022 Blog: Facebook
TYMSHARE
I cut a deal with TYMSHARE to get a monthly tape with a copy of all VMSHARE files for putting up on internal IBM systems and the network (had some problem with IBM lawyers concerned that internal employees might be contaminated by exposure to customer information).
I would also see them at least at the monthly meetings hosted at SLAC ... but would also drop by their place periodically. One visit they demoed ADVENTURE, which they had found on the SAIL PDP10, copied to their PDP10 and ported to VM370/CMS.
Starting in the early 80s, I had the HSDT project, T1 and faster computer links ... was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers ... and I'm making presentations at NSF (& potential NSF) locations. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... then internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO), with support from other agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
NSFNET network
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
recent posts mentioning VMSHARE:
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021k.html#98 BITNET XMAS EXEC
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#48 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021h.html#47 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#1 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#45 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021c.html#12 Z/VM
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021b.html#81 The Golden Age of computer user groups
https://www.garlic.com/~lynn/2021b.html#69 Fumble Finger Distribution list
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
https://www.garlic.com/~lynn/2019e.html#160 Y2K
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2019d.html#66 Facebook Knows More About You Than the CIA
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019b.html#77 IBM downturn
https://www.garlic.com/~lynn/2019b.html#54 Misinformation: anti-vaccine bullshit
https://www.garlic.com/~lynn/2019b.html#24 Online Computer Conferencing
https://www.garlic.com/~lynn/2019b.html#21 Online Computer Conferencing
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev. Date: 19 Feb 2022 Blog: Facebook
re:
Worked mostly with Ron Hubb ... starting in the original HONE CP67 days and before the consolidation in Palo Alto. Knew Marty from before his HONE days.
trivia: starting in the late 70s, periodically new DPD execs were horrified to find that HONE was VM/370 based ... and thot they could make their career by having it moved to MVS. All resources would be put on the effort for a year or so; it would be declared a success, the person promoted, and things would return to normal until it was repeated.
Then in the 80s, somebody decided that the reason HONE couldn't be converted to MVS was because it was running my enhanced production operating system ... and that HONE had to convert to a vanilla system (or at least one w/o my enhancements, because what would happen if I was hit by a bus), which would then enable them to convert to MVS.
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
more trivia: the PROFS group were picking up internal applications (for wrapping 3270 menus around) and picked up a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version, they tried to have him fired (having taken credit for everything in PROFS). The whole thing quieted down when it was demonstrated that the VMSG author's initials are in a non-displayed field in every PROFS note. After that the VMSG author only shared his source with Marty and me.
some past posts mentioning PROFS & VMSG:
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#86 IBM EMAIL
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021d.html#48 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2021b.html#37 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2019b.html#20 Internal Telephone Message System
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Online at home Date: 19 Feb 2022 Blog: Facebook
Had online 2741 terminal at home starting march 1970 ... summer 1977, upgraded with CDI miniterm (sort of like TI silent 700) along with (portable) microfiche viewer ... plant site had microfiche printer ... could route printed output to it ... and have results within a day. Had some manuals ... but mostly assembler listings.
litigation resulted in the 23jun1969 unbundling announcement ... started (separate) charging for (application) software, SE services, maint, etc (but managed to make the case that kernel software should still be free). In the early 70s there was the "Future System" effort ... replace all 370s with something completely different (internal politics was killing off 370 projects ... and the lack of new 370s is credited with giving the clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipeline.
posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys
Clone 370s were also the motivation to start charging for kernel software ... initially kernel add-ons ... then transitioning to charging for all kernel software ... and my dynamic adaptive resource manager (add-on) was the initial guinea pig (got to spend time with business planners and lawyers; aka, one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters).
Early 80s, transition was complete and the OCO-wars (object code only)
began ... can see some discussion in the VMSHARE archives (In Aug1976,
TYMSHARE started offering their CMS-based online computer conferencing
free to the SHARE organization)
http://vm.marist.edu/~vmshare
dynamic adaptive resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Interop 88 Date: 19 Feb 2022 Blog: Facebook
At Interop '88, I had a PC/RT with megapel display in a (not IBM) booth in the central court at right angles to the SUN booth. Case was at the SUN booth with SNMP; convinced him to come over and install SNMP on the PC/RT.
Weekend before the start ... floor nets were crashing ... it was one of the first times that a large number of machines were attached to all the floor nets ... and acting as gateways ... led to the requirement in RFC1122 that all hosts must default to ipforwarding off, pg28&29:
A host that forwards datagrams generated by another host is acting as
a gateway and MUST also meet the specifications laid out in the
gateway requirements RFC [INTRO:2]. An Internet host that includes
embedded gateway code MUST have a configuration switch to disable the
gateway function, and this switch MUST default to the non-gateway
mode. In this mode, a datagram arriving through one interface will
not be forwarded to another host or gateway (unless it is
source-routed), regardless of whether the host is single-homed or
multihomed. The host software MUST NOT automatically move into
gateway mode if the host has more than one interface, as the operator
of the machine may neither want to provide that service nor be
competent to do so.
... snip ...
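The modern descendant of that requirement is still a per-host switch that defaults off; a minimal sketch (assuming a standard Linux host with the usual /proc interface; writing the setting requires root):

FORWARD = "/proc/sys/net/ipv4/ip_forward"   # standard Linux location

def ip_forwarding_enabled() -> bool:
    with open(FORWARD) as f:
        return f.read().strip() == "1"

def disable_ip_forwarding() -> None:        # back to RFC1122 non-gateway mode
    with open(FORWARD, "w") as f:
        f.write("0\n")

print("forwarding on" if ip_forwarding_enabled() else "forwarding off")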
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev. Date: 19 Feb 2022 Blog: Facebook
re:
One of the possible reasons IBM was down on my case (including eliminating internal datacenters running my enhanced systems) ... was that in the late 70s and early 80s I had been blamed for online computer conferencing on the internal network. It really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem ... only about 300 were active ... but claims that upwards of 25,000 were reading ... folklore is that when the corporate executive committee was told about it, 5of6 wanted to fire me.
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Starting in the early 80s, I had the HSDT project, T1 and faster computer links, including working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP (possibly contributing was the online computer conferencing), the NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Later I'm also doing IBM's HA/CMP product (I had renamed it from
HA/6000 when I started doing technical/scientific cluster scale-up with
national labs and commercial cluster scale-up with RDBMS vendors). Old
post referencing the Jan1992 meeting with Oracle CEO on cluster scale-up
(16-way by mid92, 128-way by ye92):
https://www.garlic.com/~lynn/95.html#13
Within a few weeks, cluster scale-up is transferred, announced as IBM supercomputer, and we are told we can't work on anything with more than four processors. We leave IBM a few months later.
Computerworld news 17feb1992 (from wayback machine) ... IBM establishes
laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
17feb92 ibm supercomputer press ... for scientific/technical *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May92 press, cluster supercomputing caught IBM by *SURPRISE*
https://www.garlic.com/~lynn/2001n.html#6000clusters2
15Jun1992 press, cluster computers, mentions IBM plans to
"demonstrate" a 32-microprocessor mainframe later in 1992 ... is that
tightly-coupled or loosely-coupled?
https://www.garlic.com/~lynn/2001n.html#6000clusters3
z900 16-processors not until 2000; z990 32-processors 2003. I've periodically mentioned getting involved in a 16-way (tightly-coupled) 370 mainframe in the 70s, and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping the 168 to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-way support. Then some of us were invited to never visit POK again, and the 3033 processor engineers were directed to stop being distracted.
trivia: over the years, there has been significant overlap between cluster scale-up technology, the cloud megadatacenters, and cluster supercomputing.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SMP posts
https://www.garlic.com/~lynn/subtopic.html#smp
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3270 Terminals Date: 19 Feb 2022 Blog: Facebook
When the 3274/3278 came out, it was much slower than the 3272/3277 ... lots of the terminal electronics had been moved back into the (shared) 3274 controller ... which significantly drove up coax protocol chatter & latency (cutting 3278 manufacturing costs). The 3272/3277 had .086sec hardware response (leaving at most .164sec system response for a human to see quarter-second response, back in the early 80s when the interactive computing productivity studies were all the rage). The 3274/3278 had .3-.5+ sec response (depending on data). Complaints sent to the 3278 product administrator about it being much worse for interactive computing ... got back the response that 3278s were intended for "data entry" (i.e. electronic keypunch).
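The arithmetic behind those numbers, as a minimal sketch (the response-time figures are from above; the budget calculation is just the subtraction spelled out):

# illustrative arithmetic only (figures from the post): how much system
# response is left if the human is to see quarter-second response
TARGET = 0.25                       # perceived-response goal, seconds
hardware = {"3272/3277": 0.086,     # terminal/controller hardware response
            "3274/3278": 0.300}     # best case; data-dependent up to .5+ sec

for terminal, hw in hardware.items():
    budget = TARGET - hw            # what's left for the system
    if budget > 0:
        print(f"{terminal}: system response budget {budget:.3f} sec")
    else:
        print(f"{terminal}: hardware alone already exceeds {TARGET} sec")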
Later, the IBM/PC 3277 emulation card had 3-4 times higher upload/download throughput than the 3278 emulation cards.
some recent 3272/3277 posts
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
From long ago and far away:
Date: 01/17/86 12:37:14
From: wheeler
To: (losgatos distribution)
I was in YKT this week & visited xxxxx yyyy. He is shipping me two
PCCAs now ... since I couldn't remember the address out here ... he is
sending them care of zzzzz. The demo they had with PCCA on PCNET with
various host connections was quite impressive, both terminal sessions
and file transfer. Terminal sessions supported going "both ways"
... PVM from PCDOS over PCNET to AT with PCCA, into 370 PVM and using
PVM internal net to log on anywhere. A version of MYTE with NETBIOS
support is used on the local PC machine. They claim end-to-end data
rate of only 70kbytes per second now ... attributed to bottlenecks
associated with NETBIOS programming. They could significantly improve
that with bypassing NETBIOS and/or going to faster PC-to-PC
interconnect (token ring, ethernet, etc). However, 70kbytes/sec is
still significantly better than the 15kbytes/sec that Myte gets using
TCA support thru 3274.
... snip ... top of post, old email index
I also have a copy of the PCCA (channel attach interface, which also evolves into the 8232 for TCP/IP) announcement email sent out 1APR1985 (not april fools) ... to a small interested-party distribution list.
Note AWD did their own 4mbit T/R card for the PC/RT (AT-bus). For the RS/6000 microchannel, AWD was forced to use the heavily kneecapped PS2 microchannel cards (the PS2 16mbit T/R microchannel card had lower card throughput than the PC/RT 4mbit T/R AT-bus card).
some other recent posts mentioning the PC/RT 4mbit T/R card
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021j.html#49 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#70 IBM MYTE
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#9 IBM Kneecapping products
https://www.garlic.com/~lynn/2019e.html#139 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019b.html#79 IBM downturn
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
https://www.garlic.com/~lynn/2019.html#72 Token-Ring
https://www.garlic.com/~lynn/2019.html#66 Token-Ring
https://www.garlic.com/~lynn/2019.html#54 IBM bureaucracy
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2018e.html#103 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#24 8088 and 68k, where it went wrong
The communication group was fiercely fighting off client/server and distributed computing and also trying to prevent mainframe TCP/IP from being announced. When they lost, they changed their tactic and claimed that since they had corporate strategic (stranglehold) responsibility for everything that crossed datacenter walls, TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the "fixes" to support RFC1044 and in some tuning tests at Cray Research, between a 4341 and a Cray, got 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
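A back-of-envelope version of that "bytes per instruction" comparison; a minimal sketch where only the 44kbytes/sec figure comes from the post, and the MIPS ratings, channel data rate, and CPU fractions are assumptions picked just to show the shape of the calculation:

# illustrative back-of-envelope only: the 44kbytes/sec figure is from the
# post; the MIPS ratings, channel rate, and CPU fractions are assumptions
def bytes_per_instruction(bytes_per_sec, mips, cpu_fraction):
    return bytes_per_sec / (mips * 1e6 * cpu_fraction)

base  = bytes_per_instruction(44_000, 16.0, 1.0)     # stock stack: ~whole 3090 CPU
fixed = bytes_per_instruction(1_000_000, 1.2, 0.6)   # RFC1044 path: 4341, modest CPU

print(f"base:  {base:.5f} bytes/instruction")
print(f"fixed: {fixed:.5f} bytes/instruction")
print(f"improvement: ~{fixed/base:.0f}x")   # lands in the same ballpark as
                                            # the "something like 500 times" claim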
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev. Date: 19 Feb 2022 Blog: Facebook
re:
In Aug1976, Tymshare
https://en.wikipedia.org/wiki/Tymshare
started offering its CMS-based online computer conferencing "free" to
user group SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives.
http://vm.marist.edu/~vmshare
I had cut a deal with Tymshare to get a monthly tape dump of all VMSHARE
(and later PCSHARE) files for putting up on the internal network and
systems ... including HONE. email from long ago and far away:
Date: 6 March 1986, 15:04:45 PST
From: <long time person at HONE>
To: wheeler
A few months ago HONE brought up Complex E. This complex is composed
of HONE31, HONE32, HONE33, and HONE39 (another HONE development
machine). Seems no one told you about it and the VMSHARE database has
not been updated and we received a complaint from the field.
... snip ... top of post, old email index, HONE email
posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
recent posts mentioning vmshare
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021k.html#98 BITNET XMAS EXEC
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#48 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021h.html#47 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#1 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#45 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021c.html#12 Z/VM
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021b.html#81 The Golden Age of computer user groups
https://www.garlic.com/~lynn/2021b.html#69 Fumble Finger Distribution list
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Career Date: 20 Feb 2022 Blog: Facebook
re:
... some more detail/repeat: My father died when I was in junior high ... I was the oldest and constantly needed jobs. In high school I worked for the local hardware store ... which would loan me out to local contractors. Made enough to start university after high school graduation (washing dishes during the school year). The summer after freshman year, I was foreman on a construction job (had three nine-person crews; it was a really wet spring and they were way behind schedule, so we were quickly doing 80+hr weeks).
Sophomore year took a two-semester-hour intro to fortran/computers. After the end of the semester got a student programming job ... rewrite 1401 MPIO (tape<->unit record: reader, printer, punch) in assembler for the 360/30. Given a bunch of manuals and got to design and implement my own monitor, dispatching, interrupt handlers, device drivers, error recovery, storage management, etc. The univ. shut down the datacenter over the weekend and I had the whole place to myself, although 48hrs w/o sleep could make monday morning classes difficult. Within a few weeks, I had a 2000-card assembler program.
Then, within a year of the intro class, was hired fulltime, responsible for os/360 (the univ. had been sold a 360/67 for tss/360, replacing a 709/1401; tss/360 never came to production fruition, so it ran as a 360/65 with os/360).
Before I graduated, was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services ... consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities.
I thought the Renton datacenter was possibly the largest, a couple hundred million in 360 systems, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the director of Renton and the CFO ... who only had a 360/30 for payroll ... although they enlarged it, putting in a 360/67 for me to play with when I wasn't doing other stuff. When I graduated, I joined the IBM science center (instead of staying at Boeing).
re:
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022b.html#11 Seattle Dataprocessing
other Seattle related
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
When I joined IBM, it was going through rapid growth and after not too long, I was asked to be a manager. I asked to take the manager's manual home over the weekend to read. Monday I told them I wouldn't make a good IBM manager, my experience dealing with employees was in the parking lot. I was never asked again.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Suisse Secrets Date: 21 Feb 2022 Blog: Facebook
Suisse Secrets
Suisse-Secrets: The Statement of the Source
https://www.sueddeutsche.de/wirtschaft/suisse-secrets-statement-switzerland-source-1.5532520
What is the Suisse secrets leak and why are we publishing it?
https://www.theguardian.com/news/2022/feb/20/suisse-secrets-leak-financial-crime-public-interest
Revealed: Credit Suisse leak unmasks criminals, fraudsters and corrupt
politicians
https://www.theguardian.com/news/2022/feb/20/credit-suisse-secrets-leak-unmasks-criminals-fraudsters-corrupt-politicians
Crooks, kleptocrats and crisis: a timeline of Credit Suisse scandals
https://www.theguardian.com/news/2022/feb/21/tax-timeline-credit-suisse-scandals
Data Leak Exposes How a Swiss Bank Served Strongmen and Spies
https://www.nytimes.com/2022/02/20/business/credit-suisse-leak-swiss-bank.html
Investigation claims Credit Suisse handled dirty money
https://www.oodaloop.com/briefs/2022/02/21/investigation-claims-credit-suisse-handled-dirty-money/
Banking World Rocked After Leak Exposes 18,000 Credit Suisse Accounts
https://www.oodaloop.com/briefs/2022/02/21/banking-world-rocked-after-leak-exposes-18000-credit-suisse-accounts/
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
libor posts
https://www.garlic.com/~lynn/submisc.html#libor
tax evasion, fraud, avoidance, havens, etc
https://www.garlic.com/~lynn/submisc.html#tax.evasion
specific past posts mentioning Suisse
https://www.garlic.com/~lynn/2021k.html#24 The System: Who Rigged It, How We Fix It
https://www.garlic.com/~lynn/2017i.html#39 Toys R Us: Another Private Equity Casualty
https://www.garlic.com/~lynn/2014g.html#58 Credit Suisse, BNP Paribas at Risk of Criminal Charges Over Taxes, Business With Banned Nations
https://www.garlic.com/~lynn/2014g.html#33 Credit Suisse's Guilty Plea: The WSJ Uses the Right Adjective to Modify the Wrong Noun
https://www.garlic.com/~lynn/2014g.html#1 HFT is harmful, say US market participants
https://www.garlic.com/~lynn/2014f.html#59 GAO and Wall Street Journal Whitewash Huge Criminal Bank Frauds
https://www.garlic.com/~lynn/2014d.html#53 FDIC Sues 16 Big Banks That Set Key Rate
https://www.garlic.com/~lynn/2014c.html#98 Credit Suisse 'cloak-and-dagger' tactics cost US taxpayers billions
https://www.garlic.com/~lynn/2013k.html#78 Libor Rate-Probe Spotlight Shines on Higher-Ups
https://www.garlic.com/~lynn/2013k.html#71 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012m.html#20 Hundreds Of Billions Of Dollars Expected To Be Withdrawn From Swiss Banks Amid Tax Evasion Crackdown
https://www.garlic.com/~lynn/2011g.html#30 Bank email archives thrown open in financial crash report
https://www.garlic.com/~lynn/2010l.html#44 PROP instead of POPS, PoO, et al
https://www.garlic.com/~lynn/2009d.html#9 HSBC is expected to announce a profit, which is good, what did they do differently?
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2007u.html#19 Distributed Computing
https://www.garlic.com/~lynn/2007p.html#28 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/aepay12.htm#25 Cyber Security In The Financial Services Sector
https://www.garlic.com/~lynn/aepay10.htm#49 Finance firms push messaging standards
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Leadership Date: 21 Feb 2022 Blog: Facebook
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
Organic Design for Command and Control
https://pdfs.semanticscholar.org/6ca9/63358751c859d7b68736aca1aa9d1a8d4e53.pdf
Patterns of Conflict
http://www.projectwhitehorse.com/pdfs/boyd/patterns%20of%20conflict.pdf
also
https://en.wikipedia.org/wiki/Patterns_of_Conflict
trivia: 1989/1990, the Commandant of the Marine Corps leverages Boyd for a makeover of the Corps (at a time when IBM was desperately in need of a makeover)
There have continued to be "Boyd" meetings at Marine Corps Univ. in
Quantico ... even after Boyd passed in 1997. Some amount of discussion
regarding "Mission Command" (vis-a-vis "command and control")
... i.e. provide intent, not detailed orders.
https://en.wikipedia.org/wiki/Mission_command
https://en.wikipedia.org/wiki/Mission-type_tactics
https://smallwarsjournal.com/jrnl/art/how-germans-defined-auftragstaktik-what-mission-command-and-not
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Boyd posts & URLs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Security Date: 21 Feb 2022 Blog: Facebook
When I was an undergraduate doing stuff on CP67, IBM was suggesting things that I might work on; in retrospect some plausibly originated with gov. guys ... which I didn't learn about until much later, after joining the science center. CSC had also ported APL\360 to CP67/CMS for CMS\APL with fixes and enhancements. Some of the early remote users were business planners in Armonk ... who loaded the most valuable IBM information on the CSC CP67 system ... used for implementing an APL-based business model. We had to demonstrate very strong security ... in part because there were also professors, staff, and students from Boston/Cambridge area institutions using the same system. Sometime later, IBM got a new CSO (who had come from gov. service, at one time head of the presidential detail) and I was asked to run around with him talking about computer security (... while some physical security rubbed off on me). other old ref, gone 404, but lives on at wayback machine
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
had little or no cooperation with other product groups. After leaving IBM, was brought in as consultant to a small client/server startup. Two of the former Oracle people (that we had worked with on cluster scale-up for HA/CMP ... aka 128 processors, before it was transferred, announced as IBM supercomputer, and we were told we couldn't work on anything with more than four processors) were there, responsible for something called "commerce server", and wanted to do payment transactions. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority over everything between the webservers and the payment networks, but could only make recommendations on the browser/client side ... some of which were almost immediately violated.
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet gateways to payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
https://www.garlic.com/~lynn/subintegrity.html#payments
... in dealing with ISPs about possible security measures ... conjectured that they may have been resistant because if they don't do anything, they aren't responsible ... if they do something & it isn't 100% absolutely perfect ... they might be sued (for not having prevented something).
I did a talk about "Why Internet Isn't Business Critical
Dataprocessing" (based on the compensating processes and software that I
had to do for e-commerce) that Postel (IETF Internet Standards editor)
sponsored.
https://en.wikipedia.org/wiki/Jon_Postel
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
... also served on financial industry standards body ... mostly involving security, as well as financial industry critical infrastructure protection meetings in the white house annex ... one of the issues in the (security) standards, financial entities have advantage when sued if they can show they meet industry standards (if they don't meet standards, the institution has burden of proof they weren't at fault, while if they do meet standards, the other party has burden of proof to show institution was at fault).
some posts on x9 standards and security
https://www.garlic.com/~lynn/x959.html#aads
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
was also brought in to cal. state to help wordsmith some legislation; at the time they were working on "electronic signature", "data breach notification" (first in the nation), and "opt-in personal information sharing". There were some other organizations heavily involved in privacy issues, having done detailed public privacy surveys and found the #1 issue was "identity theft", primarily data breaches involving information that can be used in fraudulent financial transactions. Normally entities take security measures in self protection/interest ... but in the case of the data breaches, little was being done, i.e. in most of the cases, the institutions weren't at risk ... it was the public, consumers, economy, gov, etc (everybody but the institution). It was hoped that publicity from the breaches might motivate institutions to take corrective action.
electronic signature posts
https://www.garlic.com/~lynn/subpubkey.html#signature
identity theft posts
https://www.garlic.com/~lynn/submisc.html#identity.theft
data breach notification posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification.notification
specific posts mentioning burden of proof
https://www.garlic.com/~lynn/2019e.html#51 Big Pharma CEO: 'We're in Business of Shareholder Profit, Not Helping The Sick
https://www.garlic.com/~lynn/2019e.html#32 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019e.html#14 Chicago Theory
https://www.garlic.com/~lynn/2018c.html#53 EMV: Why the US migration didn't happen sooner
https://www.garlic.com/~lynn/2016h.html#104 PC Compromise and Internet Transactions
https://www.garlic.com/~lynn/2015f.html#17 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015f.html#7 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015f.html#6 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015d.html#65 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014l.html#39 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014k.html#43 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014h.html#67 Sale receipt--obligatory?
https://www.garlic.com/~lynn/2014b.html#69 Why is the US a decade behind Europe on 'chip and pin' cards?
https://www.garlic.com/~lynn/2013o.html#60 Target Offers Free Credit Monitoring Following Security Breach
https://www.garlic.com/~lynn/2013o.html#58 US a laggard in adopting more secure credit cards
https://www.garlic.com/~lynn/2013m.html#20 Steve B sees what investors think
https://www.garlic.com/~lynn/2013m.html#17 Steve B sees what investors think
https://www.garlic.com/~lynn/2013j.html#90 copyright protection/Doug Englebart
https://www.garlic.com/~lynn/2013j.html#52 U.S. agents 'got lucky' pursuing accused Russia master hackers
https://www.garlic.com/~lynn/2013g.html#38 regulation,bridges,streams
https://www.garlic.com/~lynn/2013f.html#8 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers
https://www.garlic.com/~lynn/2012j.html#35 The Conceptual ATM program
https://www.garlic.com/~lynn/2012d.html#62 Gordon Gekko Says
https://www.garlic.com/~lynn/2012b.html#71 Password shortcomings
https://www.garlic.com/~lynn/2011b.html#60 A Two Way Non-repudiation Contract Exchange Scheme
https://www.garlic.com/~lynn/2010m.html#77 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2010m.html#23 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010l.html#82 Five Theses on Security Protocols
https://www.garlic.com/~lynn/2010k.html#7 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010i.html#63 Wal-Mart to support smartcard payments
https://www.garlic.com/~lynn/2010d.html#47 Industry groups leap to Chip and PIN's defence
https://www.garlic.com/~lynn/2010d.html#24 Cambridge researchers show Chip and PIN system vulnerable to fraud
https://www.garlic.com/~lynn/2010d.html#21 Credit card data security: Who's responsible?
https://www.garlic.com/~lynn/2010b.html#3 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010b.html#1 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2009r.html#72 Why don't people use certificate-based access authentication?
https://www.garlic.com/~lynn/2009n.html#71 Sophisticated cybercrooks cracking bank security efforts
https://www.garlic.com/~lynn/2009i.html#52 Credit cards
https://www.garlic.com/~lynn/2009g.html#62 Solving password problems one at a time, Re: The password-reset paradox
https://www.garlic.com/~lynn/2007j.html#67 open source voting
https://www.garlic.com/~lynn/2007i.html#23 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006e.html#8 Beginner's Pubkey Crypto Question
https://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2005o.html#26 How good is TEA, REALLY?
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005e.html#41 xml-security vs. native security
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#59 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2000g.html#34 does CA need the proof of acceptance of key binding ?
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/aepay10.htm#72 Invisible Ink, E-signatures slow to broadly catch on
https://www.garlic.com/~lynn/aadsm28.htm#38 The Trouble with Threat Modelling
https://www.garlic.com/~lynn/aadsm26.htm#63 Public key encrypt-then-sign or sign-then-encrypt?
https://www.garlic.com/~lynn/aadsm26.htm#60 crypto component services - is there a market?
https://www.garlic.com/~lynn/aadsm23.htm#33 Chip-and-Pin terminals were replaced by "repairworkers"?
https://www.garlic.com/~lynn/aadsm23.htm#14 Shifting the Burden - legal tactics from the contracts world
https://www.garlic.com/~lynn/aadsm21.htm#35 [Clips] Banks Seek Better Online-Security Tools
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm19.htm#33 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm18.htm#55 MD5 collision in X509 certificates
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#nonreput Sender and receiver non-repudiation
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: LANs and IBM Token-Ring Date: 22 Feb 2022 Blog: Facebook
AWD did their own 4mbit token-ring card for the PC/RT (pc/at bus). The communication group was fiercely fighting off client/server and distributed computing and was severely kneecapping the PS/2 microchannel cards. For the rs/6000 microchannel, AWD was told they couldn't do their own cards, but had to use the (kneecapped) PS2 cards ... the 16mbit T/R microchannel card had lower throughput than the pc/rt 4mbit T/R card.
801/risc, pc/rt, rs/6000, etc posts
https://www.garlic.com/~lynn/subtopic.html#801
The new Almaden bldg was heavily provisioned with CAT4 assuming t/r ... but it was found that CAT4 10mbit enet had higher card throughput, higher aggregate network throughput, and lower network latency (than 16mbit t/r).
The IBM 16mbit T/R card was $800, while a $69 CAT4 10mbit enet card (pc/at bus) regularly sustained 8.5mbit/sec. IBM Dallas e/s center published an enet comparison ... I could only account for their numbers if they used the original 3mbit prototype (before the "listen before transmit" standard).
The claim was IBM originally did (terminal) T/R (CAT4) because the end-to-end datacenter-to-desktop 3270 coax runs were starting to exceed (some) bldg weight load limits.
About the same time as the IBM Dallas E/S LAN paper ... there was an acm sigcomm paper that studied CAT4 enet10 ... normal card and aggregate network throughput was 8.5mbit/sec for 30 stations. However, when they put all stations into a low-level device driver loop constantly transmitting minimum-sized packets ... effective aggregate throughput only dropped off to 8mbit/sec.
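A minimal sketch of the arithmetic in the last few paragraphs (prices and measured rates as stated above; only the comparisons themselves are added):

# illustrative arithmetic from the figures above (prices and measured rates
# as stated in the post; only the comparisons are added)
enet_price, enet_nominal, enet_sustained = 69, 10.0, 8.5   # CAT4 10mbit, pc/at bus
tr_price, tr_nominal = 800, 16.0                           # IBM 16mbit T/R card

print(f"enet utilization: {enet_sustained/enet_nominal:.0%} of nominal")
print(f"enet $/sustained mbit: {enet_price/enet_sustained:.2f}")
# the T/R card's sustained rate isn't given; even crediting full nominal rate:
print(f"T/R $/nominal mbit (best case): {tr_price/tr_nominal:.2f}")
# the sigcomm worst case (30 stations flooding minimum-size packets) only
# dropped aggregate from 8.5 to 8.0 mbit/sec, i.e. graceful degradation:
print(f"worst-case degradation: {(8.5 - 8.0)/8.5:.1%}")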
3-tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: LANs and IBM Token-Ring Date: 22 Feb 2022 Blog: Facebook
re:
a little topic drift: in 1980, STL was bursting at the seams and 300 people from the IMS group were being moved to an offsite bldg. They tested "remote 3270" (3270 controllers at the offsite bldg over telco leased lines) and found the human factors totally unacceptable (compared to the "in house" channel-attached 3270 controllers ... connected to VM370 with better than quarter-second system response ... not MVS/TSO ... which was lucky to ever have even one-second response). I get con'ed into doing channel-extender so they can put the channel-attached 3270 controllers at the offsite bldg ... with no perceptible difference in response & human factors between offsite and in STL.
An unanticipated side effect was that their (vm370) system throughput increased 10-15%. The 3270 controllers had previously been spread across all the 168-3 channels, shared with disks ... the channel-extender box had significantly lower channel busy for the same amount of 3270 terminal data (compared to the IBM channel-attached 3270 controllers) ... running all the 3270 terminal traffic with significantly lower channel busy increased disk throughput and therefore overall system throughput. There was then a suggestion that they put all in-bldg 3270 controllers on channel-extenders (not for distance, but for reducing channel busy and increasing disk throughput).
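A toy model of the effect (all the numbers below are assumptions, not measurements from the post; only the direction of the effect is the point):

# toy model, all numbers assumed: terminal chatter on shared channels eats
# channel-busy that disk I/O then has to wait behind
ibm_3270_busy = 0.15   # assumed channel busy from IBM 3270 controller chatter
ext_3270_busy = 0.03   # assumed channel busy for same traffic via extender

for label, busy in [("3270 controllers on channel", ibm_3270_busy),
                    ("via channel-extender", ext_3270_busy)]:
    headroom = 1.0 - busy           # channel capacity left over for disk I/O
    print(f"{label}: {headroom:.0%} of channel available to disks")

print(f"relative disk capacity gain: {(1-ext_3270_busy)/(1-ibm_3270_busy)-1:.0%}")
# ~14% here ... the same ballpark as the observed 10-15% system throughput gain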
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Security Date: 22 Feb 2022 Blog: Facebook
re:
The 1996 MDC was held at Moscone (san fran); all the banners said "Internet", but the constant refrain in all the sessions was "protect your investment" ... i.e. visual basic code embedded in data files that was automatically executed (a convenience aimed at the computer/data illiterate) ... it originated in small, closed business networks ... but was being extended to the wild anarchy of the Internet ... with no additional protections. The eventual countermeasure was to scan data files for recognizable malware patterns ... the problem was that the malware patterns increased faster than the recognition capability ... quickly exceeding a million with no slowing down.
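A minimal sketch of that countermeasure (the signatures below are made-up placeholders, not real malware patterns):

# minimal sketch of the countermeasure described: scan data files for known
# malware byte patterns (the signatures here are made-up placeholders)
SIGNATURES = {
    b"Sub AutoOpen()": "macro-autoexec",   # hypothetical VB macro marker
    b"CreateObject(":  "script-dropper",   # hypothetical
}

def scan(blob: bytes):
    return [name for pattern, name in SIGNATURES.items() if pattern in blob]

print(scan(b'...Sub AutoOpen() ... CreateObject("x")...'))
# the structural problem: scan cost grows with the signature count, and the
# signature count grew faster than recognition could keep up (past a million)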
internet
https://www.garlic.com/~lynn/subnetwork.html#internet
I had worked with Jim Gray at san jose research (System/R, the original RDBMS); he then left for Tandem, then DEC, and then ran sanfran msoft research. After the turn of the century (before he and his boat disappeared), he cons me into interviewing for chief security architect in redmond ... the interview dragged on for a couple weeks ... but we never came to agreement.
RDBMS & System/r posts
https://www.garlic.com/~lynn/submain.html#systemr
past MDC Moscone posts
https://www.garlic.com/~lynn/2021e.html#92 Anti-virus
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2017h.html#76 Any definitive reference for why the PDP-11 was little-endian?
https://www.garlic.com/~lynn/2017g.html#84 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2016e.html#35 How the internet was invented
https://www.garlic.com/~lynn/2016e.html#19 Is it a lost cause?
https://www.garlic.com/~lynn/2014h.html#23 weird trivia
https://www.garlic.com/~lynn/2014f.html#11 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014f.html#10 It's all K&R's fault
https://www.garlic.com/~lynn/2012.html#93 Where are all the old tech workers?
https://www.garlic.com/~lynn/2012.html#81 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2011f.html#57 Are Tablets a Passing Fad?
https://www.garlic.com/~lynn/2011f.html#15 Identifying Latest zOS Fixes
https://www.garlic.com/~lynn/2011d.html#58 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2011c.html#50 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2010p.html#40 The Great Cyberheist
https://www.garlic.com/~lynn/2010p.html#9 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
https://www.garlic.com/~lynn/2010j.html#36 Favourite computer history books?
https://www.garlic.com/~lynn/2008r.html#26 realtors (and GM, too!)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Security Date: 22 Feb 2022 Blog: Facebook
re:
Mid-90s financial conferences: dialup consumer banking operations were giving presentations on why they were moving to the internet ... significant customer support costs (60+ combinations of serial port modem drivers, platforms, releases, modem brands, etc ... as well as installing after-market serial port modems bricking systems) ... all getting offloaded on the ISPs.
At the same time, commercial/cash-management dialup operations were saying they would never move to the internet because of all the vulnerabilities.
dial-up banking posts
https://www.garlic.com/~lynn/submisc.html#dialup-banking
internet
https://www.garlic.com/~lynn/subnetwork.html#internet
Around the turn of the century, there were several institutions coming up with chipcards/hardware tokens for internet security ... one of the institutions got a deal on obsolete (serial port) chipcard readers (USB was replacing serial port because of its enormous problems) and distributed them free to their customers. Then the customer support problems began exploding ... all institutional knowledge about serial port support problems had evaporated in a couple years. Rapidly spreading in the industry was the belief that chipcards weren't practical in the consumer market (even tho it was the serial port chipcard readers, not the chipcards) ... and the secure programs were being shut down.
We managed to pull together some institutions and Microsoft security teams to discuss the programs being shut down based on inaccurate information ... the conclusion was that it wouldn't be possible to reverse the damage.
... damage also appeared to do in EU FINREAD ... some posts
https://www.garlic.com/~lynn/subintegrity.html#finread
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: LANs and IBM Token-Ring Date: 23 Feb 2022 Blog: Facebook
re:
... aka the start of the period of studies showing 1/4-sec response improved human productivity ... MVS/TSO systems rarely achieved even one-second response ... so IBM concurrently had papers claiming response made little difference and studies showing that it made a lot of difference.
and from IBM Jargon:
bad response - n. A delay in the response time to a trivial request of
a computer that is longer than two tenths of one second. In the 1970s,
IBM 3277 display terminals attached to quite small System/360 machines
could service up to 19 interruptions every second from a user - I
measured it myself. Today, this kind of response time is considered
impossible or unachievable, even though work by Doherty, Thadhani, and
others has shown that human productivity and satisfaction are almost
linearly inversely proportional to computer response time. It is hoped
(but not expected) that the definition of Bad Response will drop below
one tenth of a second by 1990.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Security Date: 23 Feb 2022 Blog: Facebook
re:
IBM helping with (easily exploited) chipcard deployment in UK in the
90s, url gone 404, but lives on at wayback machine
https://web.archive.org/web/20061106193736/http://www-03.ibm.com/industries/financialservices/doc/content/solution/1026217103.html
and then a couple years later, there was a large pilot deployment on the
US east coast. I pointed out the cloning vulnerability ... but they were
so microfocused on lost/stolen countermeasures ... they didn't seem to
understand. Trip report at cartes 2002 that references a presentation on
how simple it was to clone (last paragraph) ... note I wasn't able to
make the conference, but the presenters came by later to give me a
copy
https://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html
At the 2003 ATM Integrity Task Force (i.e. as in cash machines), a Federal LEO gave a detailed presentation on how easy it was to make "Yes Card" clones and how they could be used for fraud ... somebody in the audience (not me) exclaimed "so they managed to spend billions of dollars to prove that chipcards are less secure than magstripe cards". By this time, all evidence of the large east coast pilot had evaporated, with the expectation it wouldn't be tried again in the US for a long time (until the vulnerabilities & exploits had been worked out in other jurisdictions).
disclaimer: I had already done a chip that had significantly higher security at significantly lower cost and had none of their vulnerabilities (the downside was that it commoditized payment transactions ... disruptive technology, a threat to the existing status quo and many of the existing stakeholders).
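A minimal sketch of why static authentication data clones so easily (a simplified model, not the actual card protocol; an HMAC stands in for whatever dynamic function a real chip would use):

# simplified model of the "Yes Card" weakness: static authentication data
# can be captured once and replayed by a clone; a dynamic challenge-response
# over a per-card secret cannot
import hashlib, hmac, os

card_secret = os.urandom(16)    # stays inside a tamper-resistant chip
static_auth = hashlib.sha256(b"card-data").digest()  # same bytes every time

clone_replay = static_auth            # attacker captured the static data once...
assert clone_replay == static_auth    # ...and the replay always "verifies"

def dynamic_response(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(8)             # fresh per transaction
expected = dynamic_response(card_secret, challenge)
# a clone without card_secret has no way to compute `expected` for a
# challenge it has never seen, so copying the readable card data is useless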
"yes card" posts
https://www.garlic.com/~lynn/subintegrity.html#yescard
data breach posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification.notification
past posts mentioning ATM Integrity Task Force
https://www.garlic.com/~lynn/2021k.html#17 Data Breach
https://www.garlic.com/~lynn/2021g.html#81 EMV migration compliance for banking from IBM (1997)
https://www.garlic.com/~lynn/2018f.html#106 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018e.html#58 Watch Your Debit and Credit Cards: Thieves Get Craftier With Skimmers
https://www.garlic.com/~lynn/2018c.html#53 EMV: Why the US migration didn't happen sooner
https://www.garlic.com/~lynn/2018c.html#51 EMV: Why the US migration didn't happen sooner
https://www.garlic.com/~lynn/2017i.html#44 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017i.html#42 Commercial grade ink and paper (Western Union)
https://www.garlic.com/~lynn/2017h.html#4 chip card
https://www.garlic.com/~lynn/2017g.html#80 Running unsupported is dangerous was Re: AW: Re: LE strikes again
https://www.garlic.com/~lynn/2017g.html#77 TRAX manual set for sale
https://www.garlic.com/~lynn/2016f.html#60 Funny error messages
https://www.garlic.com/~lynn/2016f.html#34 The chip card transition in the US has been a disaster
https://www.garlic.com/~lynn/2016e.html#74 The chip card transition in the US has been a disaster
https://www.garlic.com/~lynn/2016e.html#73 The chip card transition in the US has been a disaster
https://www.garlic.com/~lynn/2016c.html#55 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2015h.html#74 Were you at SHARE in Seattle? Watch your credit card statements!
https://www.garlic.com/~lynn/2015d.html#61 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015.html#5 NYT on Sony hacking
https://www.garlic.com/~lynn/2014l.html#39 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014k.html#51 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014k.html#42 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014h.html#67 Sale receipt--obligatory?
https://www.garlic.com/~lynn/2014f.html#79 EMV
https://www.garlic.com/~lynn/2014f.html#17 Online Debit, Credit Fraud Will Soon Get Much Worse
https://www.garlic.com/~lynn/2013m.html#20 Steve B sees what investors think
https://www.garlic.com/~lynn/2013m.html#17 Steve B sees what investors think
https://www.garlic.com/~lynn/2013f.html#59 Crypto Facility performance
https://www.garlic.com/~lynn/2012o.html#50 What will contactless payment do to security?
https://www.garlic.com/~lynn/2012j.html#35 The Conceptual ATM program
https://www.garlic.com/~lynn/2012g.html#33 Cartons of Punch Cards
https://www.garlic.com/~lynn/2012b.html#71 Password shortcomings
https://www.garlic.com/~lynn/2012b.html#55 Mythbusters Banned From Discussing RFID By Visa And Mastercard
https://www.garlic.com/~lynn/2012.html#9 Anyone sceptically about Two Factor Authentication?
https://www.garlic.com/~lynn/2011m.html#39 ISBNs
https://www.garlic.com/~lynn/2011l.html#71 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011l.html#18 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2010p.html#2 Fun with ATM Skimmers, Part III
https://www.garlic.com/~lynn/2010p.html#0 CARD AUTHENTICATION TECHNOLOGY - Embedded keypad on Card - Is this the future
https://www.garlic.com/~lynn/2010o.html#81 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010o.html#52 Payment Card Industry Pursues Profits Over Security
https://www.garlic.com/~lynn/2010l.html#73 A mighty fortress is our PKI, Part II
https://www.garlic.com/~lynn/2010i.html#63 Wal-Mart to support smartcard payments
https://www.garlic.com/~lynn/2010f.html#27 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2010f.html#26 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2010d.html#25 Cambridge researchers show Chip and PIN system vulnerable to fraud
https://www.garlic.com/~lynn/2010d.html#24 Cambridge researchers show Chip and PIN system vulnerable to fraud
https://www.garlic.com/~lynn/2010d.html#17 Chip and PIN is Broken!
https://www.garlic.com/~lynn/2010.html#97 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010.html#72 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010.html#71 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2009r.html#41 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#16 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009q.html#78 70 Years of ATM Innovation
https://www.garlic.com/~lynn/aepay11.htm#64 EFTA to Adaopt ATM Ant-Fraud Measures
https://www.garlic.com/~lynn/aepay10.htm#41 ATM Scams - Whose Liability Is It, Anyway?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe MIPS Date: 23 Feb 2022 Blog: Facebook
Note: documentation on the Z10->Z196 chips describes the addition of lots of memory-latency-compensating technology that had been in other chips (risc, i86, power), in some cases for decades ... out-of-order execution, branch prediction, speculative execution ... accounting for half of the throughput performance increase from Z10 to Z196. There was a strong implication that part of the chip circuits were common to mainframe and power.
In the 90s, i86 documentation described hardware translating i86 instructions into a sequence of risc micro-ops ... which were then scheduled for execution (starting to negate the performance difference between i86 chips and risc chips).
trivia: some comments have been that memory is the new disk ... the latency to memory (as on a cache miss), when counted in processor cycles, is similar to the latency to 60s disks when counted in 60s cpu processor cycles.
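Rough round numbers make the point; a minimal sketch where all figures are assumptions for illustration (none are from the post):

# illustrative round numbers only (all assumed) for the "memory is the new
# disk" comparison: lost instruction slots per access, then vs now
disk_slots = 0.025 * 0.5e6      # 60s: ~25ms disk access, ~0.5 MIPS CPU
miss_slots = 100e-9 * 5e9 * 4   # now: ~100ns miss, ~5GHz, ~4-wide issue

print(f"60s disk access: ~{disk_slots:,.0f} instruction slots")
print(f"memory miss:     ~{miss_slots:,.0f} instruction slots")
# the raw latencies differ by a factor of ~250,000, but in lost instruction
# slots the two are within an order of magnitude ... hence all the
# latency-compensating hardware showing up in the z196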
There used to be mainframe benchmark MIPS published (actually, the number
of benchmark iterations compared to a 158-3 assumed to be one MIPS)
... those stopped, and now one has to infer based on statements of change
from previous processors:
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
* pubs say z15 is 1.25 times z14 (1.25*150BIPS or 190BIPS)
In the z196 time-frame, E5-2600 blades (two eight-core chips) were
benchmarking at 500BIPS (same industry benchmark, based on iterations
compared to a 158-3) ... ten times a max-configured z196.
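The per-processor figures in the list above are just the aggregate divided out; a minimal sketch recomputing them (small differences from the list reflect rounding in the published aggregate BIPS):

# recomputing the per-processor column from the aggregate figures above
# (benchmark iterations relative to a 158-3, per the post)
systems = [  # (name, processors, aggregate BIPS)
    ("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18), ("z10", 64, 30),
    ("z196", 80, 50), ("EC12", 101, 75), ("z13", 140, 100),
    ("z14", 170, 150), ("z15", 190, 190),
]
prev = None
for name, procs, bips in systems:
    per_proc = bips * 1000 / procs   # MIPS per processor
    note = f", {bips/prev:.2f}x prior aggregate" if prev else ""
    print(f"{name}: {per_proc:.0f} MIPS/proc{note}")
    prev = bips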
past posts mentioning z10->z196
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021i.html#2 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2019d.html#63 IBM 3330 & 3380
https://www.garlic.com/~lynn/2018f.html#12 IBM mainframe today
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2016b.html#103 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#81 IBM Automatic (COBOL) Binary Optimizer Now Availabile
https://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2014l.html#90 What's the difference between doing performance in a mainframe environment versus doing in others
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#67 Is end of mainframe near ?
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: From the Kingpins of Private Equity, A New Dagger to Democracy Date: 23 Feb 2022 Blog: Facebook
From the Kingpins of Private Equity, A New Dagger to Democracy
PRIVATE EQUITY'S DIRTY DOZEN, 12 Firms Dripping In Oil And The Wealthy
Executives Who Run Them
https://public-accountability.org/wp-content/uploads/2022/02/PESP_SpecialReport_DirtyDozen_Feb2022-Final-LowRes.pdf
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
New Data Shows U.S. Government Has Been Bought For $14 Billion
https://www.counterpunch.org/2022/02/23/new-data-shows-u-s-government-has-been-bought-for-14-billion/
But yes, our politicians are more bought-off than ever before. A new
analysis from Americans For Tax Fairness found that total billionaire
contributions have soared over the past few years. The cycle before
the Citizens United decision only saw $16 million worth of donations
from billionaires to campaigns. This past cycle saw $2.6 billion worth
of donations.
...
So, all of this begs the question: How much did the U.S. spend? The
2020 election cost $14.4 billion.
... snip ...
... and regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Security Date: 23 Feb 2022 Blog: Facebook
re:
In the mid-90s was asked to participate in the x9a10 financial standard working group, which had been given the requirement to preserve the integrity of the financial industry for all retail payments ... we did a detailed end-to-end look at payment transactions ... and found that there was a lot of dual-use information at millions of locations around the world. The issue was that the same information was required for several business processes (at millions of locations) AND for transaction authentication (and so could be used by crooks for fraudulent transactions). What was proposed in x9a10 ... rather than attempting to prevent breaches at millions of locations ... was to make the information useless for fraudulent transactions (not addressing the breach of the information, just eliminating the motivation for criminals to perform the breaches) ... totally different information for transaction authentication and for the dozens of business processes. Of course this was disruptive technology and posed a threat to the existing status quo and many of the major stakeholders.
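A minimal sketch of the idea (a toy model, not the actual x9.59 message format; requires the third-party python "cryptography" package):

# toy model of the x9.59 idea: the transaction is authorized by a digital
# signature, so the breachable account data alone is useless for fraud
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())       # held by the account holder
txn = b"acct=12345;amount=19.95;merchant=example"   # the dual-use data

signature = key.sign(txn, ec.ECDSA(hashes.SHA256()))
# the bank registers only the *public* key with the account:
key.public_key().verify(signature, txn, ec.ECDSA(hashes.SHA256()))  # ok
# a crook who harvests the account number but not the private key cannot
# produce a valid signature ... eliminating the payoff for the breach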
x9.59 standard
https://www.garlic.com/~lynn/x959.html
x9.59 posts
https://www.garlic.com/~lynn/subpubkey.html#x959
fraud posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
secrets and account number harvesting posts
https://www.garlic.com/~lynn/subintegrity.html#harvest
payment posts
https://www.garlic.com/~lynn/subintegrity.html#payments
data breach notification posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification.notification
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: From the Kingpins of Private Equity, A New Dagger to Democracy Date: 23 Feb 2022 Blog: Facebook
re:
Wall Street Banker Profits Off Phoenix Housing Inflation and Soaring
Rent Prices
https://www.bloomberg.com/news/features/2022-02-18/wall-street-banker-profits-off-phoenix-housing-inflation-and-soaring-rent-prices
Private equity money is pouring into the Phoenix real estate market,
turning first-time homebuyers into renters.
... snip ...
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
new version of: "Company Towns: 1880s to 1935"
https://socialwelfare.library.vcu.edu/programs/housing/company-towns-1890s-to-1935/
In other cases, the company's motivations were less ideal. The
remoteness and lack of transportation prevented workers from leaving
for other jobs or to buy from other, independent merchants. In some
cases, companies paid employees with a scrip that was only good at
company stores. Without external competition, housing costs and
groceries in company towns could become exorbitant, and the workers
built up large debts that they were required to pay off before
leaving. Company towns often housed laborers in fenced-in or guarded
areas, with the excuse that they were "protecting" laborers from
unscrupulous traveling salesmen. In the South, free laborers and
convict laborers were often housed in the same spaces, and suffered
equally terrible mistreatment.
... snip ...
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 4th generation language Date: 23 Feb 2022 Blog: Facebook
Even before SQL (& RDBMS), originally done on VM370/CMS (aka System/R at IBM SJR, later tech transfer to Endicott for SQL/DS and to STL for DB2), there were other "4th Generation Languages". One of the original 4th generation languages, Mathematica's RAMIS, was made available through NCSS (a cp67/cms spin-off of the ibm science center; cp67/cms was the virtual machine precursor to vm370/cms).
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
NOMAD
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a
report that would have taken many hundreds of lines of Cobol to
produce. The product grew in capability and in revenue, both to NCSS
and to Mathematica, who enjoyed increasing royalty payments from the
sizable customer base. FOCUS from Information Builders, Inc (IBI),
did even better, with revenue approaching a reported $150M per
year. RAMIS moved among several owners, ending at Computer Associates
in 1990, and has had little limelight since. NOMAD's owners, Thomson,
continue to market the language from Aonix, Inc. While the three
continue to deliver 10-to-1 coding improvements over the 3GL
alternatives of Fortran, Cobol, or PL/1, the movements to object
orientation and outsourcing have stagnated acceptance.
... snip ...
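For flavor, the NOMAD one-liner quoted above rendered in a modern high-level data language (pandas, a third-party package, with made-up sample data; purely illustrative):

# purely illustrative: the NOMAD one-liner quoted above, in pandas
import pandas as pd

sales = pd.DataFrame({
    "division": ["East", "East", "West", "West"],
    "month":    ["Jan",  "Feb",  "Jan",  "Feb"],
    "sales":    [100,    120,     90,    130],
})
# PRINT ACROSS MONTH SUM SALES BY DIVISION:
print(sales.pivot_table(values="sales", index="division",
                        columns="month", aggfunc="sum"))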
other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica makes Ramis available to TYMSHARE for their
VM370-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to
Mathematica's RAMIS, the first Fourth-generation programming language
(4GL). Key developers/programmers of RAMIS, some stayed with
Mathematica, others left to form the company that became Information
Builders, known for its FOCUS product
... snip ...
4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language
this mentions "first financial language" done in 60s at IDC (another
cp67/cms spinoff from the IBM cambridge science center)
https://www.computerhistory.org/collections/catalog/102658182
as an aside, a decade later, person responsible for FFL joins with
another to form startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc
TYMSHARE topic drift ...
https://en.wikipedia.org/wiki/Tymshare
started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
other posts mentioning 4th gen. language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#29 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2018e.html#45 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017c.html#85 Great mainframe history(?)
https://www.garlic.com/~lynn/2016e.html#71 Dinosaurisation of we oldies?
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014c.html#77 Bloat
https://www.garlic.com/~lynn/2013m.html#62 Google F1 was: Re: MongoDB
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012d.html#59 A computer metaphor for systems integration
https://www.garlic.com/~lynn/2011p.html#1 Deja Cloud?
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 100 days after IBM split, Kyndryl signs strategic cloud pact with AWS Date: 24 Feb 2022 Blog: Facebook
100 days after IBM split, Kyndryl signs strategic cloud pact with AWS. Kyndryl has signed strategic alliances with all three cloud hyperscalers.
A former coworker's father was an economist in the gov/IBM trial and said that all of the seven dwarfs testified that by the late 50s, every computer maker knew that the single most important thing for customers was a compatible product line. The issue was that computer use was rapidly spreading ... as uses increased, customers were having to frequently move to the next more powerful computer ... and having to constantly redo applications for an incompatible computer was a major market inhibitor. For whatever reason, IBM executives were the only ones able to force the plant managers to toe the line on the compatibility requirement. Being the only vendor that provided the single most important feature allowed IBM to dominate the market.
They appeared to lose sight of that in the early 70s when IBM was
going to replace the (compatible) product line with the (completely
incompatible) Future System. During FS, internal politics was shutting
down 370 efforts, and the lack of new products during the FS period is
credited with giving the (compatible) clone makers their market
foothold. When FS finally implodes there is a mad rush to get stuff back
into the 370 product pipelines ... more information
http://www.jfsowa.com/computer/memo125.htm
one of the final nails in the FS coffin was a study by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown).
Trivia: the folklore about Rochester doing greatly simplified FS as S/38 ... there was significant performance headroom between available technology and the throughput required by the S/38 market.
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books,
1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of the
70s:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some past posts mentioning compatible product line
https://www.garlic.com/~lynn/2018c.html#18 Old word processors
https://www.garlic.com/~lynn/2018c.html#16 Old word processors
https://www.garlic.com/~lynn/2017f.html#40 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2013i.html#73 Future of COBOL based on RDz policies was Re: RDz or RDzEnterprise developers
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2011h.html#36 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2010k.html#21 Snow White and the Seven Dwarfs
https://www.garlic.com/~lynn/2007t.html#70 Remembering the CDC 6600
https://www.garlic.com/~lynn/2007p.html#9 CA to IBM product swap
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM History Date: 25 Feb 2022 Blog: Facebook
IBM history ... I was con'ed into working with the 370/195 engineers, who were looking at hyperthreading the machine (simulating a two-processor multiprocessor). Hyperthreading (now common in many architectures ... two or more instruction streams in the same processor) shows up in ACS/360 ... referenced in an account of the end of ACS/360 (IBM executives were afraid that it would advance the state-of-the-art too fast and IBM would lose control of the market)
370/195 had a 64-instruction pipeline and supported out-of-order execution, and could run at 10MIPS, but didn't have branch prediction or speculative execution ... so conditional branches drained the pipeline and many codes only ran at 5MIPS. Adding support for two instruction streams (each operating at least 5MIPS) would help keep the machine running at 10MIPS (modulo operating system multiprocessor overhead; MVT 360/65MP was claiming only something like 1.2-1.5 times a single processor)
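(aside: a back-of-envelope sketch in Python of the pipeline-drain arithmetic; the 10MIPS peak and 5MIPS branchy-code figures are from the above, the simple averaging model is an assumption:

PEAK_MIPS = 10.0

def effective_mips(fraction_of_time_drained):
    # crude model: pipeline delivers peak MIPS except while refilling
    return PEAK_MIPS * (1.0 - fraction_of_time_drained)

# branch-heavy code spending ~half its time refilling the pipeline:
single_stream = effective_mips(0.5)              # ~5 MIPS, as above

# with two independent instruction streams, one stream can use the
# pipeline while the other is stalled on a branch:
two_streams = min(PEAK_MIPS, 2 * single_stream)  # back to ~10 MIPS

print(single_stream, two_streams)

... the same reasoning behind hyperthreading today.)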
multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
However, it wasn't long before the decision to add virtual memory to
all 370s ... which would have required a nearly complete redo for the
195 ... as it was, even the 165 was non-trivial. A decade ago, I was
asked if I could track down the decision and found somebody involved
... basically MVT storage management was so bad that regions had to be
four times larger than typically used, limiting a typical 1mbyte
370/165 to four regions, not sufficient to keep most 165s fully
utilized. Going to a 16mbyte virtual address space would allow
increasing the number of regions by a factor of four with little or no
paging ... old post (with some of the responses) that also mentions
other history, hasp/spooling, etc
https://www.garlic.com/~lynn/2011d.html#73
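(aside: the arithmetic in Python; the 4x overallocation and the four-region limit are from the above, the 64kbyte typical region size is a made-up number chosen to make the figures work:

real_storage     = 1 * 1024 * 1024      # typical 370/165
typical_region   = 64 * 1024            # hypothetical actually-used size
allocated_region = 4 * typical_region   # 4x overallocation per the above

print(real_storage // allocated_region)   # 4 regions fit in real storage

virtual_space = 16 * 1024 * 1024
print(virtual_space // allocated_region)  # 64 address-space slots

... only ~4x more concurrent regions were actually needed to keep a 165 busy, so the working pages still fit in real storage with little or no paging.)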
The "370 architecture" redbook was a CMS SCRIPT file ... command line option would generate ether the full "redbook" or just the 370 principles of operation subset. The full 370 virtual memory had a lot more stuff than was initially announced. The 165 engineers were eventually complaining for them to implement the full 370 virtual memory would slip the announce schedule by six months. Eventually it was decided to cut a lot out to keep the 165 on schedule (and all the other models that had implemented the full architecture would have to eliminate the cut features ... as well as any software using those features, would have to be redone).
Old post from 1981 about 85/165/168/3033/trout all being the same
machine:
https://www.garlic.com/~lynn/2019c.html#email810423
i.e. Future System in the early 70s was completely different than 370
and was going to completely replace it ... and during FS period, 370
efforts were being killed off (the lack of new 370 products during
that period is credited with giving clone 370 makers their market
foothold). When FS implodes, there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty 3033
(started out remapping 168 logic to 20% faster chips) & 3081 (some
warmed over FS stuff) efforts in parallel. Some more FS details
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
Once 3033 was out the door, the 3033 (85/165/168) processor engineers
start on trout ... announced (1985) as 3090
https://en.wikipedia.org/wiki/IBM_3090
trivia: the 23Mar1981 email mentions my "ramblings"; in the late 70s
and early 80s I was blamed for online computer conferencing on the
internal network. It really took off spring of 1981 after I
distributed trip report to see Jim Gray at Tandem. "Tandem Memos" get
mention in "IBM Jargon"
http://www.comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
six copies of approx 300 pages were printed, along with executive
summary and summary of the summary and placed in Tandem 3-ring binders
and sent to the corporate executive committee (folklore is 5of6 wanted
to fire me):
• The perception of many technical people in IBM is that the company is
rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible.
• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors to
the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action.
• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment.
... snip ...
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
... took another decade (1981-1992) ... IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company .... reference gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM History Date: 25 Feb 2022 Blog: Facebook
re:
... aka (took another decade, 1981-1992), IBM had gone into the red
and was being reorganized into the 13 "baby blues" in preparation for
breaking up the company .... reference gone behind paywall but mostly
lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM, but we get a call from the bowels of Armonk asking if we could help with breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in Gerstner as a new CEO and reverses the breakup).
Also we were hearing from former co-workers that top IBM executives were spending all their time shifting expenses from the following year to the current year. We ask our contact from the bowels of Armonk what was going on. He said that the current year had gone into the red and the executives wouldn't get a bonus. However, if they can shift enough expenses from the following year to the current year, even putting following year just slightly into the black ... the way the executive bonus plan was written, they would get a bonus more than twice as large as any previous bonus (rewarded for taking the company into the red).
AMEX was in competition with KKR for (private equity) LBO of RJR and
KKR wins. KKR runs into trouble and hires away AMEX president to help
with RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then IBM Board hires away the AMEX ex-president as CEO who reverses
the breakup and uses some of the PE techniques used at RJR, at IBM
(gone 404 but lives on at wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
becomes a financial engineering company ... stock buybacks used to be
illegal (because it was too easy for executives to manipulate the
market ... aka banned in the wake of the '29 crash)
https://corpgov.law.harvard.edu/2020/10/23/the-dangers-of-buybacks-mitigating-common-pitfalls/
Buybacks are a fairly new phenomenon and have been gaining in
popularity relative to dividends recently. All but banned in the US
during the 1930s, buybacks were seen as a form of market
manipulation. Buybacks were largely illegal until 1982, when the SEC
adopted Rule 10B-18 (the safe-harbor provision) under the Reagan
administration to combat corporate raiders. This change reintroduced
buybacks in the US, leading to wider adoption around the world over
the next 20 years. Figure 1 (below) shows that the use of buybacks in
non-US companies grew from 14 percent in 1999 to 43 percent in 2018.
... snip ...
Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks
https://web.archive.org/web/20140623003038/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM History Date: 25 Feb 2022 Blog: Facebook
re:
A former coworker's father was an economist in the gov/IBM trial and said that all of the seven dwarfs testified that by the late 50s, every computer maker knew that the single most important thing for customers was a compatible product line. The issue was that computer use was rapidly spreading ... as uses increased, customers were having to frequently move to the next more powerful computer ... and having to constantly redo applications for an incompatible computer was a major market inhibitor. For whatever reason, IBM executives were the only ones able to force the plant managers to toe the line on the compatibility requirement. Being the only vendor that provided the single most important feature allowed IBM to dominate the market.
They appeared to lose sight of that in the early 70s when IBM was
going to replace the (compatible) product line with the (completely
incompatible) Future System. During FS, internal politics was shutting
down 370 efforts, and the lack of new products during the FS period is
credited with giving the (compatible) clone makers their market
foothold. When FS finally implodes there is a mad rush to get stuff back
into the 370 product pipelines ... more information
http://www.jfsowa.com/computer/memo125.htm
one of the final nails in the FS coffin was a study by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown). Trivia: the folklore is that Rochester did a greatly simplified FS as the S/38 ... there was significant performance headroom between available technology and the throughput required by the S/38 market.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM History Date: 25 Feb 2022 Blog: Facebook
re:
some of the MIT CTSS/7094 people went to the 5th flr Project MAC to do MULTICS. Others went to the IBM science center on 4th flr to do virtual machine (CP40/CMS & CP67/CMS), internal network, lots of performance tools, etc. CTSS RUNOFF was redone for CMS as SCRIPT. In 1969, GML was invented at CSC and GML tag processing added to SCRIPT (a decade later GML morphs into ISO standard SGML, and after another decade morphs into HTML at CERN)
trivia: CSC wanted a model 50 for the virtual memory hardware changes
... but all the spare 50s were going to the FAA ATC project ... so they
had to settle for a 360/40 for the hardware changes ... and created
CP40/CMS. CP40/CMS morphs into CP67/CMS when the 360/67 becomes
available, standard with virtual memory. Lots more history (including Les' CP40)
https://www.leeandmelindavarian.com/Melinda#VMHist
quote here
https://www.leeandmelindavarian.com/Melinda/25paper.pdf
Since the early time-sharing experiments used base and limit registers
for relocation, they had to roll in and roll out entire programs when
switching users....Virtual memory, with its paging technique, was
expected to reduce significantly the time spent waiting for an
exchange of user programs.
What was most significant was that the commitment to virtual memory
was backed with no successful experience. A system of that period that
had implemented virtual memory was the Ferranti Atlas computer, and
that was known not to be working well. What was frightening is that
nobody who was setting his virtual memory direction at IBM knew why
Atlas didn't work.(23)
... snip ...
... from "CP/40 -- The Origin of VM/370"
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
What was most significant to the CP/40 team was the commitment to
virtual memory was backed with no successful experience. A system of
that period that had implemented virtual memory was the Ferranti Atlas
computer and that was known to not be working well. What was
frightening is that nobody who was setting this virtual memory
direction at IBM knew why Atlas didn't work.
.... snip ...
The CP67 that CSC delivered to the univ Jan68 (3rd installation, after CSC itself and Lincoln Labs) had no page thrashing controls and a primitive page replacement algorithm. I added dynamic adaptive page thrashing controls, a highly efficient page replacement algorithm, and dynamic adaptive resource management and scheduling ... significantly rewrote lots of code to reduce pathlengths, redid DASD I/O for seek and rotational optimization ... and a few other things.
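(aside: for readers unfamiliar with page replacement, a minimal clock-style LRU-approximation sketch in Python ... a generic illustration of the kind of algorithm being discussed, *not* the actual CP67 code:

class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page resident in each frame
        self.refbit = [0] * nframes      # set on every touch of the page
        self.hand = 0

    def touch(self, page):
        if page in self.frames:                       # hit
            self.refbit[self.frames.index(page)] = 1
            return None
        while True:                                   # miss: find a victim
            if self.refbit[self.hand]:                # recently used:
                self.refbit[self.hand] = 0            #   second chance
                self.hand = (self.hand + 1) % len(self.frames)
            else:
                victim = self.frames[self.hand]
                self.frames[self.hand] = page
                self.refbit[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return victim                         # page to steal

c = Clock(3)
for p in [1, 2, 3, 1, 4, 2]:
    print(p, "->", c.touch(p))

... pages touched since the hand last passed get a second chance; pages not touched get stolen.)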
Within a year after taking a 2-credit intro to fortran/computers, I was hired fulltime to be responsible for OS/360. The univ. had been sold a 360/67 for TSS/360 to replace its 709/1401; however, TSS/360 never came to production fruition and so the machine ran as a 360/65 with OS/360. The univ. shutdown the datacenter over the weekend and I had the place (mostly) all to myself, although 48hrs w/o sleep could make Monday morning classes difficult. I also had to share it with the IBM TSS/360 SE who would use it to try and improve TSS/360. After getting CP67, I would also play with CP67 some. Early on (before starting to rewrite lots of CP67), the TSS/360 SE and I did a fortran edit/compile/execute benchmark simulating interactive users. For TSS/360 there were four simulated users. I did 35 simulated CP67/CMS users, getting better response, throughput, performance etc than TSS/360 (w/four users).
Later at IBM I did a page-mapped file system for CMS ... and I would claim I learned what not to do from TSS/360. Much of FS "single-level-store" came from TSS/360 ... and was one of the reasons I periodically would ridicule what they were doing.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page map filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM History Date: 25 Feb 2022 Blog: Facebook
re:
trivia: STRATUS was somewhat a spinoff of the MULTICS people, with redundant specialized fault-tolerant hardware. In the 80s, IBM paid STRATUS an enormous amount of money to sell their machine logo'ed as the S/88.
The last product we did at IBM started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAX Cluster to RS/6000. I renamed it to HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors. The S/88 Product Administrator brought us in to pitch HA/CMP to Bellcore for the 1-800 system, which had a 5-nines availability requirement. A single S/88 system update used up a century's worth of (5-9s) downtime budget. HA/CMP had multiple machines with fall-over ... so it also handled (software) system upgrades (w/o outage). Before we got going, congress passed legislation that allowed customers to keep their 1-800 number when moving to different telcos. That required Bellcore to reset and redo the 1-800 lookup software.
The S/88 Product Administrator also cons me into writing a section for
the corporate continuous available strategy document. However, it got
pulled when both Rochester (AS/400, combined follow-on to S/34, S/36, &
S/38) and POK (mainframe) both complained that they couldn't meet the
requirements. Old post about Jan1992 meeting with Oracle CEO on
cluster scale-up (16-system by mid92, 128-system by ye92)
https://www.garlic.com/~lynn/95.html#13
within a couple weeks, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors. We depart IBM a few months later.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Fujitsu confirms end date for mainframe and Unix systems Date: 26 Feb 2022 Blog: Facebook
Fujitsu confirms end date for mainframe and Unix systems. Once the Japanese giant's main squeezes, they're being ditched at end of decade
... a few years ago, IBM mainframe hardware revenue was only a couple percent of total revenue ... but the mainframe group was 25% of total revenue (just about all software & services) and 40% of bottom line profit.
1992 IBM had gone into the red and was being reorg'ed into the 13
"baby blues" in preparation for breaking up the company .... reference
gone behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
IBM's mainframe funding engine had significantly dwindled ... much of the low hanging fruit had moved to other platforms. Mid-90s, the financial industry, a perennial IBM mainframe funding source, was spending billions of dollars to move to "killer micros". The issue was that increasing workload and globalization was resulting in settlement no longer completing in the overnight batch window. Lots of the applications were from the 60s&70s ... having added real-time transaction options over the years ... but financial settlement was still a batch operation done overnight. The implementations were being rewritten to do straight-through processing, using parallelization on lots of killer micros to complete operations in real time. Some of us pointed out that they were using industry standard parallelization libraries that had 100 times the overhead of cobol batch ... but were ignored. Then major pilots started deploying and things went up in flames, the overhead of the parallelization libraries totally swamping any anticipated throughput increase from using lots of killer micros.
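(aside: a toy throughput model in Python of why the pilots went up in flames; the 100x per-transaction overhead figure is from the above, the machine counts and speeds are made-up illustration:

def relative_throughput(n_micros, micro_speed_vs_mainframe, overhead_factor):
    # throughput relative to the single-system cobol batch implementation
    return (n_micros * micro_speed_vs_mainframe) / overhead_factor

# e.g. 50 "killer micros", each 1/2 a mainframe on this workload,
# paying a 100x per-transaction software overhead:
print(relative_throughput(50, 0.5, 100))   # 0.25 ... a 4x slowdown

... with 100x pathlength overhead, even hundreds of micros never catch the batch window they were supposed to eliminate.)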
A few years after the turn of the century, I became involved in a new effort ... basically a high level financial business language that generated fine-grain SQL operations that were easily parallelized on RDBMS clusters (enormously reducing application development and maintenance costs) ... relying on the significant work that RDBMS vendors (including IBM) had put into (non-mainframe) RDBMS cluster throughput (providing throughput, redundancy, fault tolerance, ACID properties, etc, a lot of stuff that we had worked on at IBM for HA/6000 and HA/CMP). Also, industry fixed-block disk performance had significantly improved (IBM was no longer making their own disks, relying on CKD simulation using the same industry standard disks) and the industry processor chips were exceeding the performance of mainframe processors (in the 90s, i86 chips were implementing a hardware layer that translated i86 instructions into risc micro-ops; instructions might take several machine cycles to finish, but multiple concurrent instructions resulted in multiple instructions finishing per machine cycle ... no longer the 60s/70s analysis of calculating avg. elapsed instruction time to get total instruction throughput) ... we demo'ed to a number of financial industry organizations with high acceptance ... then hit a brick wall. We were finally told that executives still bore the scars from the failed efforts in the 90s and weren't going to try it again any time soon.
Now, at the turn of the century, I had gotten a gig to look at throughput of a major financial outsourcer (handling everything from small community banks to large TBTF banks; total processing, soup-to-nuts, for 500M credit card accounts). It had >40 max configured IBM mainframes (@$30M, >$1.2B) with none older than 18 months, constant rolling upgrades ... the number needed to finish settlement in the overnight batch windows (all running the same 450K-statement Cobol application, account processing partitioned across the available systems). They had a large group that had been managing performance for decades, but had gotten somewhat myopically focused on specific analysis. Using some different analysis, I was able to find a 14% improvement ... another person brought in, using yet different analysis, found an additional 7% ... for 21% total (a couple hundred million in savings in IBM system costs). Mainframe hardware revenue was dwindling fast, the mainframe business somewhat being sustained by software&services.
some recent posts mentioning 450k statement cobol app
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#13 IBM today
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Fujitsu confirms end date for mainframe and Unix systems Date: 26 Feb 2022 Blog: Facebook
re:
... past trivia: In 1980, STL was bursting at the seams and moving 300 people from the IMS group to an offsite bldg, with dataprocessing back to the STL datacenter. I got con'ed into doing channel-extender support so they could put "local" channel-attached 3270 controllers at the offsite bldg ... resulting in no perceptible response & human factors difference between STL and offsite. The hardware vendor tried to get IBM to release my support, but a group in POK playing with some serial stuff got it vetoed (concerned that if it was in the market, it would make it harder to justify releasing their stuff).
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
In 1988, I'm asked to help LLNL (national lab) standardize some serial stuff they are playing with, which quickly becomes the fibre channel standard (including some stuff I had done in 1980) ... started out 1gbit, full-duplex, 2gbit aggregate (200mbytes/sec). Then in 1990, the POK people finally get their stuff released with ES/9000 as ESCON (when it is already obsolete: 17mbytes/sec ... compared to 200mbytes/sec). Later some POK engineers get involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually released as FICON.
The most recent published numbers I've found are a "peak I/O" benchmark for a max configured z196 getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a (native) FCS was announced for an E5-2600 blade server claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON running over FCS).
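(aside: the per-channel arithmetic in Python, using the published figures quoted above:

z196_peak_iops   = 2_000_000
z196_ficon_count = 104
native_fcs_iops  = 1_000_000           # "over a million IOPS" per FCS

per_ficon = z196_peak_iops / z196_ficon_count
print(per_ficon)                        # ~19,230 IOPS per FICON channel
print(native_fcs_iops / per_ficon)      # one native FCS ~ 52 FICONs

... the heavy-weight FICON protocol layered on FCS costs roughly a factor of 50 per channel.)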
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
Note: for IBM mainframes this century, early industry benchmarks
inferred MIPS from the number of benchmark iterations compared to a
370/158-3 assumed to be one MIPS; later IBM published numbers are based
just on throughput change compared to previous models.
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
* pubs say z15 is 1.25 times z14 (1.25*150BIPS or 190BIPS)
... snip ...
A max configured z196 listed at $30M, or $600,000/BIPS. z196-era E5-2600 blade servers (same industry standard benchmark) at 500BIPS (ten times a max configured z196) had an IBM base list price of $1815, or $3.63/BIPS (this was before IBM sold off its server business).
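(aside: the price/performance arithmetic in Python, from the figures above:

z196_price, z196_bips   = 30_000_000, 50
blade_price, blade_bips = 1815, 500

print(z196_price / z196_bips)     # $600,000 per BIPS
print(blade_price / blade_bips)   # $3.63 per BIPS
print((z196_price / z196_bips) / (blade_price / blade_bips))  # ~165,000x

... roughly five orders of magnitude difference in $/BIPS.)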
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Interdata Computers Date: 26 Feb 2022 Blog: Facebook
Within a year of a 2-credit intro to fortran/computers, the univ hired me fulltime responsible for OS/360. The univ. had been sold a 360/67 for TSS/360 to replace its 709/1401. TSS/360 never came to production fruition, so the univ ran it as a 360/65 w/OS360. The univ. shutdown the datacenter on weekends and I would have the whole place to myself, although 48hrs w/o sleep could make monday morning classes a little hard.
Some people from the science center came out and installed (virtual machine) CP67 (precursor to VM370) at the univ (3rd installation after CSC itself and MIT Lincoln Labs) and I would get to play with it during my weekend dedicated time ... rewriting whole bunches of the code. The original CP67 had support for 1052 & 2741 terminals with automagic terminal type recognition (using the terminal controller SAD CCW to change the terminal type line scanner for each port). The univ. had some number of ASCII terminals, so I added ASCII support (extending automagic terminal type recognition to ASCII). Trivia: when the box arrived for the CE to install the tty/ascii port scanner in the 360 terminal controller, the box was labeled "Heathkit".
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
I then wanted to have a single dial-in number for all terminals ... a
hunt group
https://en.wikipedia.org/wiki/Line_hunting
Didn't quite work: while I could switch the line scanner for each port
(on the IBM telecommunication controller), IBM had taken a short cut
and hard-wired the line speed for each port (TTY was a different line
speed from 2741&1052). Thus was born the univ. project to build a
clone controller: a mainframe channel interface board for an
Interdata/3, programmed to emulate the IBM telecommunication
controller with the addition that it could also do dynamic line speed
determination (see the sketch below). Later it was enhanced with an
Interdata/4 for the channel interface and a cluster of Interdata/3s
for the port interfaces. Interdata (and later Perkin/Elmer) sold it
commercially as an IBM clone controller. Four of us at the univ. got
written up as responsible for (some part of the) clone controller
business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
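(aside: a conceptual Python sketch of dynamic line speed determination of the kind the box added; the real implementation details aren't in the post, this only illustrates the idea that timing the shortest low pulse of an incoming character gives the bit time, hence the baud rate:

CANDIDATE_SPEEDS = [110, 134.5, 300]    # e.g. TTY ASCII, 2741-class, etc

def guess_speed(shortest_pulse_seconds):
    measured_baud = 1.0 / shortest_pulse_seconds
    # snap to the nearest supported line speed
    return min(CANDIDATE_SPEEDS, key=lambda s: abs(s - measured_baud))

print(guess_speed(1 / 110))    # 110   (Teletype ASCII)
print(guess_speed(1 / 134.5))  # 134.5 (2741-class)

... the hard-wired IBM controller couldn't do this per-port measurement, so a TTY dialing into a 2741 port was simply lost.)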
Around the turn of the century, I ran into one of the descendants of the box, handling the majority of credit card point-of-sale dialup terminals (east of the mississippi) ... some claim that it still used our original channel interface board design.
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
The clone controller business is claimed to be the major motivation
for the IBM Future System effort in the 70s (make the interface so
complex that clone makers couldn't keep up). From the law of
unintended consequences: FS was completely different from 370 and was
going to completely replace it and internal politics was shutting down
the 370 projects ... the lack of new IBM 370 offerings is claimed to
give the clone 370 processor makers their market foothold (FS as
countermeasure to clone controllers becomes responsible for rise of
clone processors). Some FS details
http://www.jfsowa.com/computer/memo125.htm
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books,
1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference to the "Future System" project 1st half of the
70s:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM selectric terminals ran with tilt-rotate codes (so needed translation from EBCDIC to tilt-rotate code) ... got a table of what characters were located where on the ball. Whatever characters you think you have in the computer have to be translated into the tilt-rotate code for that character on the ball. There were standard balls, making a lot of tilt-rotate codes common. But it was possible to have things like an APL-ball with lots of special characters, needing to know where they were on the ball and the corresponding tilt-rotate codes (changing the selectric ball could require having the corresponding table for character positions on the ball). So there really wasn't an EBCDIC code for selectric terminals ... it was "tilt-rotate" code.
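(aside: a Python illustration of the translation being described; the (tilt, rotate) values here are made up ... the real values depend on where each character sits on the particular ball:

STANDARD_BALL = {          # hypothetical positions
    "A": (0, 3), "B": (0, 4), "1": (2, 7),
}
APL_BALL = {               # same characters live elsewhere on an APL ball
    "A": (1, 5), "B": (1, 6), "1": (3, 2),
}

def to_tilt_rotate(text, ball):
    # what actually goes down the wire is tilt/rotate, not a character code
    return [ball[ch] for ch in text]

print(to_tilt_rotate("AB1", STANDARD_BALL))
print(to_tilt_rotate("AB1", APL_BALL))   # same text, different wire codes

... swap the ball without swapping the table and every character prints wrong.)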
Teletype ASCII terminals did run with direct ASCII code (on ASCII
computers, just send the same data directly to the terminal ... no
translation into tilt-rotate code required). "AND" the greatest
computer "goof": the 360 was supposed to be ASCII, per the (IBM) father
of ASCII; gone 404, but lives on at the wayback machine:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Who Goofed?
The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM
Vice President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually
will be done. I've mentioned this fiasco elsewhere. Here are some
direct extracts:
... snip ...
i.e. the ascii unit record equipment wasn't ready for the 360 announce so had to rely (supposedly temporarily?) on BCD machines (adapted for EBCDIC)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CICS, BDAM, DBMS, RDBMS Date: 26 Feb 2022 Blog: Facebook
Within a year of taking a 2-credit intro to fortran/computers, I was hired fulltime by the univ., responsible for OS/360 (running on a 360/67 as a 360/65). The univ. library got an ONR grant to do an online catalog and used some of the money to get a 2321 datacell. It was also selected as betatest site for the original CICS product, and supporting CICS was added to my tasks (never sent to a CICS class and had no source). One of the first problems was CICS wouldn't start ... eventually figured out that CICS had some (undocumented) hard coded BDAM dataset options and the library had built their datasets with a different set of BDAM options.
cics &/or bdam posts
https://www.garlic.com/~lynn/submain.html#cics
cics history
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
Was involved in System/R (the original SQL/RDBMS) and also the tech transfer to Endicott (for SQL/DS) while the company was focused on trying to turn out "EAGLE", the next great DBMS (later, when EAGLE implodes, the request was how fast can System/R be ported to MVS ... eventually released as DB2). When Jim Gray departed IBM Research for Tandem, he palmed off several things on me, including DBMS consulting with the IMS group in STL and supporting some of the early System/R customers.
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
After leaving IBM, got brought into NIH national library of medicine ... which had done their own transaction processor (about the same time I was supporting CICS for the univ library) and had a very interesting BDAM structure (two of the original implementers were still there, so we had a lot of discussions).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM cannot kill this age-discrimination lawsuit linked to CEO Date: 26 Feb 2022 Blog: Facebook
IBM cannot kill this age-discrimination lawsuit linked to CEO. Scientist's claim that Arvind Krishna unfairly had him axed found plausible enough for trial hearing
IBM had gone into the red and was being reorganized into the 13 "baby
blues" in preparation for breaking up the company .... reference gone
behind paywall but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
AMEX had been in competition with KKR for (private equity) LBO of RJR
and KKR wins. KKR runs into trouble and hires away AMEX president to
help with RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then IBM Board hires away the AMEX ex-president as CEO who reverses
the breakup and uses some of the PE techniques used at RJR, at IBM
(gone 404 but lives on at wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 100 days after IBM split, Kyndryl signs strategic cloud pact with AWS Date: 26 Feb 2022 Blog: Facebook
re:
Kyndryl completes hyperscaler trifecta with AWS partnership. Now three
for three with the major cloud players after deals with Google and
Microsoft
https://www.theregister.com/2022/02/24/kyndryl_aws/
The spin out of Kyndryl from IBM came after years of declining
revenues at the division as clients opted to use cloud services
delivered by the hyperscalers instead of signing big ticket
outsourcing agreements for managed infrastructure.
It seems that Kyndryl has decided that if you can't beat 'em, join
'em. DXC Technology made the same move in 2018 and Atos decided this
month that traditional data centres services do not represent its
future.
... snip ...
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DB2 Date: 27 Feb 2022 Blog: Facebook
Early System/R in the 70s ... the IMS group in STL was criticizing System/R ... because 1) it required twice as much disk space (for the key indexes) and 2) it required several times more disk I/O for index lookup (IMS had direct record pointers as part of the data). System/R's counter was that IMS required significantly more human management because of the direct record pointers (exposed in the data). In the 80s, things radically began to shift: the amount of disk MB exploded and $$/MB significantly dropped, computer memory significantly increased (allowing caching of indexes, reducing index-search disk I/O), and the drop in computer costs saw an explosion in the size of the computer market ... w/o a corresponding increase in human DBAs (which became a costly premium). The company was preoccupied with "EAGLE" ... the next great DBMS follow-on to IMS ... so it was possible to do the System/R tech transfer to Endicott for SQL/DS. Later, when "EAGLE" implodes, there is a request for how fast System/R can be ported to MVS ... which eventually ships as DB2 (originally for decision support *ONLY*).
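(aside: a rough Python sketch of the disk-I/O argument; the record count and index-block fanout are illustrative assumptions, not measurements:

import math

records      = 1_000_000
keys_per_idx = 256                      # hypothetical index-block fanout

# IMS-style direct record pointer embedded in the data: one I/O
ims_ios = 1

# System/R-style key lookup: walk the index levels, then read the record
index_levels = math.ceil(math.log(records, keys_per_idx))   # = 3
systemr_ios  = index_levels + 1

print(ims_ios, systemr_ios)   # 1 vs 4 I/Os

... until memory got big enough to cache the index levels, as above, at which point the several-times-more-I/O argument evaporated while the human-management argument didn't.)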
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
Late 80s, started HA/6000, originally for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors.
IBM didn't have a DBMS for RS/6000 ... IBM Toronto was just starting "Shelby", a simplified portable RDBMS originally for OS/2 ... eventually renamed DB2 ... and also ported to AIX ... but it would be ages before it would have cluster scale-up support. The four major RDBMS vendors (Oracle, Ingres, Informix, and Sybase) all had VAXCluster support in the same source base with their UNIX support. I did a distributed lock manager with VAXCluster API semantics ... simplifying getting their VAXCluster RDBMS up and running on HA/CMP.
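(aside: a Python sketch of the VAXCluster-style lock-manager semantics involved; the six lock modes and their compatibility matrix are the documented VMS DLM behavior, the function names and everything else here are illustrative, not the HA/CMP implementation:

MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]
COMPATIBLE = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, currently_granted):
    # a request is granted only if compatible with every current holder
    return all(requested in COMPATIBLE[held] for held in currently_granted)

print(can_grant("PR", ["CR", "PR"]))  # True: concurrent/protected read
print(can_grant("EX", ["NL"]))        # True: null locks block nothing
print(can_grant("PW", ["PR"]))        # False: writer waits for readers

... an RDBMS already written to these semantics for VAXCluster could run against a lock manager with the same API with minimal change.)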
Old post about Jan1992 HA/CMP cluster scale-up meeting with Oracle CEO
(16 system by mid92, 128 system by ye92). One of the oracle executives
in the meeting said when he was in IBM STL, he had done most of the
System/R-SQL/DS port to MVS ... for what becomes DB2.
https://www.garlic.com/~lynn/95.html#13
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R history get together
http://www.mcjones.org/System_R/
trivia: within a few weeks of the Oracle HA/CMP meeting, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframes Date: 27 Feb 2022 Blog: Facebook
re:
In the 70s&80s, mainframe hardware was a major part of revenue
... that took a major hit in the late 80s, and by 1992 IBM had gone
into the red and was being reorganized into the 13 "baby blues" in
prep. for breaking up the company (the board eventually brought in a
new CEO who reversed the breakup). By the turn of the century,
mainframe hardware sales were something like 5% of revenue. In the
EC12 timeframe there was analysis that mainframe hardware sales were
4% of revenue, but the mainframe organization was 25% of IBM revenue
(software & services) and 40% of profit (the huge profit margin from
software&services is motivation to keep the mainframe market going).
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
* pubs say z15 is 1.25 times z14 (1.25*150BIPS or 190BIPS)
Early in the century, "MIPS" was from an industry standard benchmark
program: the number of iterations compared to 370/158 iterations
(assumed to be 1MIPS) ... not an actual instruction count. Later in
the century, increasingly had to use throughput percent change from
earlier models.
z196 was 80 processors, 50BIPS (a BIPS is a thousand MIPS) or 625MIPS/processor ... max configured $30M, or $600,000/BIPS
At the same time, an e5-2600 blade, 500BIPS (ten times z196, same industry standard benchmark program, not actual instruction count), had an IBM base list price of $1815, or $3.63/BIPS (this was before IBM sold off its server/blade business).
Note cloud vendors have been claiming for quite a while that they assemble their own blades at 1/3rd the cost of major vendor price (i.e. ~$1.21/BIPS).
IBM sold off its server/blade business shortly after industry press said that major server chip vendors were shipping at least half their product directly to cloud operators; cloud operators were turning computer hardware into a commodity business ... they had so commoditized server costs that power&cooling was increasingly becoming the major cost for cloud megadatacenters (and they were increasingly applying pressure on chip vendors to improve chip power efficiency). A large cloud operation will have a dozen or so megadatacenters around the world, each with half-million or more blade servers, staffed with 80-120 people (enormous automation).
Kyndryl completes hyperscaler trifecta with AWS partnership. Now three for three with the major cloud players after deals with Google and Microsoft
https://www.theregister.com/2022/02/24/kyndryl_aws/
The spin out of Kyndryl from IBM came after years of declining
revenues at the division as clients opted to use cloud services
delivered by the hyperscalers instead of signing big ticket
outsourcing agreements for managed infrastructure.
It seems that Kyndryl has decided that if you can't beat 'em, join
'em. DXC Technology made the same move in 2018 and Atos decided this
month that traditional data centres services do not represent its
future.
... snip ...
cloud megadatacenters
https://www.garlic.com/~lynn/submisc.html#megadatacenter
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframes Date: 27 Feb 2022 Blog: Facebook
re:
Documentation from the late 90s said i86 vendors were doing a hardware layer that translated i86 instructions into a series of risc micro-instructions for execution ... being able to match the risc performance advantage. i86 chip technology talks about the elapsed number of machine cycles per instruction ... but with enormous overlapped processing, they are actually finishing multiple instructions per cycle (360/370 were mostly purely serial, one instruction at a time, so it was straightforward to calculate instruction rate). In any case, this gives each i86 server blade ten times the performance of a max configured mainframe (and there are hundreds of thousands of such blade servers, with millions of processor cores, in every cloud megadatacenter).
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
z196 documentation claimed that the introduction of processing features that had been in RISC and i86 chips for generations ... out-of-order execution, branch prediction, speculative execution, etc ... accounted for half the per-processor improvement from z10->z196 (i.e. 469MIPS->625MIPS).
Part of the issue is the claim that memory access is the new disk ... aka memory access latency (on a cache miss), when expressed as a count of cpu cycles, is similar to the count of 60s cpu cycles for 60s disk access latency ... and so anything helping offset serial memory access latency can contribute significantly to overall throughput.
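(aside: illustrative round numbers in Python for the "memory is the new disk" point; all the figures here are assumptions, not from the post:

mips_1960s, disk_access_s = 0.5e6, 0.030   # ~0.5 MIPS cpu, 30ms disk access
issue_rate_now, miss_s    = 10e9, 100e-9   # ~10 Ginstr/s core, 100ns cache miss

print(mips_1960s * disk_access_s)  # ~15,000 lost instruction-times per disk access
print(issue_rate_now * miss_s)     # ~1,000 lost instruction-times per cache miss

... not identical counts, but the same qualitative effect: a stall costs hundreds-to-thousands of instruction opportunities, so overlapping work past the stall (out-of-order, speculation, multiple instruction streams) pays off the way multiprogramming paid off across 60s disk waits.)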
i86 article from 1995 (... some of the features start to show up in
the Z10->Z196 transition) Intel P6 1995 & risc micro-ops
https://halfhill.com/byte/1995-4_cover.html
Intel's term for this CISC/RISC hybrid instruction flow is dynamic
execution. You'll find the same basic mechanisms if you pry off the
lids of the latest RISC processors, including the IBM/Motorola PowerPC
604, the PowerPC 620, the Sun UltraSparc, the Mips R10000, the Digital
Alpha 21164, and the Hewlett-Packard PA-8000.
... snip ...
801/risc, iliad, romp, rios, pc/rt, rs/6000, etc posts
https://www.garlic.com/~lynn/subtopic.html#801
some posts referring to memory is the new disk
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2017i.html#46 Temporary Data Sets
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951
https://www.garlic.com/~lynn/2015.html#43 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014b.html#103 CPU time
https://www.garlic.com/~lynn/2011e.html#66 |What is the maximum clock rate given the state of today's technology?
recent posts mentioning "risc micro-ops"
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#66 where did RISC come from, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#1 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#62 instruction clock speed
https://www.garlic.com/~lynn/2016h.html#98 A Christmassy PL/I tale
https://www.garlic.com/~lynn/2016f.html#97 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016d.html#68 Raspberry Pi 3?
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#101 This new 'skyscraper' chip could make computers run 1,000 times faster
https://www.garlic.com/~lynn/2015h.html#81 IBM Automatic (COBOL) Binary Optimizer Now Availabile
https://www.garlic.com/~lynn/2015c.html#110 IBM System/32, System/34 implementation technology?
https://www.garlic.com/~lynn/2015.html#44 z13 "new"(?) characteristics from RedBook
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: David Boggs, Co-Inventor of Ethernet, Dies at 71 Date: 01 Mar 2022 Blog: Facebook
David Boggs, Co-Inventor of Ethernet, Dies at 71. Thanks to the invention he helped create in the 1970s, people can send email over an office network or visit a website through a coffee shop hot spot
I had the HSDT project starting in the early 80s, T1 and faster
computer links, working with the NSF director, and was supposed to get
$20M to interconnect the NSF supercomputing centers ... giving several
presentations at various supercomputing locations ... then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). Copy of
"Preliminary Announce" (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12
"The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet".
.... snip ...
... internal IBM politics prevented us from bidding on the RFP
(possibly contributing was being blamed for online computer
conferencing); the NSF director tried to help by writing the company a
letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior
VP and director of Research, copying the IBM CEO), with support from
other agencies, but that just made the internal politics worse (as did
claims that what we already had running was at least 5yrs ahead of the
winning bid; RFP awarded 24Nov87). The RFP called for T1 links ... but
they were only putting in 440kbit/sec links ... then, to make it look
like they were meeting the RFP, they put in T1 trunks with telco
multiplexors running multiple 440kbit links. As regional networks
connected in, it became the NSFNET backbone, precursor to the modern
internet
https://www.technologyreview.com/s/401444/grid-computing/
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Other trivia: the (IBM) communication group was fiercely fighting off client/server and distributed computing and trying to prevent mainframe TCP/IP from being released ... when they lost that battle, they changed their tactic and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then did the enhancements for RFC1044 and in some tuning tests at Cray Research between a Cray and an (IBM) 4341, got 4341 sustained channel throughput ... around 500 times improvement in bytes moved per instruction executed.
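A back-of-envelope check on that 500x figure; the throughput numbers are from the post, but the MIPS and CPU-utilization figures below are round-number assumptions purely for illustration:

  #include <stdio.h>

  /* back-of-envelope bytes-per-instruction check; throughputs from the
     post, MIPS/utilization figures are illustrative assumptions */
  int main(void)
  {
      double base_Bps = 44e3;      /* 44 kbytes/sec aggregate          */
      double base_ips = 10e6;      /* assume ~whole 3090 CPU, ~10 MIPS */
      double new_Bps  = 1e6;       /* ~4341 sustained channel speed    */
      double new_ips  = 0.5e6;     /* assume modest slice of 4341 CPU  */

      double base_bpi = base_Bps / base_ips;   /* ~0.0044 bytes/instr */
      double new_bpi  = new_Bps / new_ips;     /* ~2 bytes/instr      */

      printf("improvement: ~%.0fx bytes/instruction\n", new_bpi / base_bpi);
      return 0;
  }

which lands in the same order as the ~500x in the post.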
rfc 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
The communication group (and others) were spreading all sorts of
misinformation internally inside IBM ... things like SNA/VTAM could be
used for the OASC NSFnet. Somebody collected copies of lots of their
internal email and forwarded them to me ... previously posted, heavily
redacted and snipped to protect the guilty:
https://www.garlic.com/~lynn/2006w.html#email870109
A coworker, Ed, at the IBM cambridge science center was responsible
for the technology for the internal corporate network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s) ... also used for the corporate sponsored university BITNET
https://en.wikipedia.org/wiki/BITNET
We then transfer out to San Jose Research in 1977 ... the old SJMN
(Gillmor) article about Ed has gone behind a paywall, but lives free
at the wayback machine (Ed had transferred to San Diego by this time,
and recently passed Aug2020)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
IN 1980, some engineers at International Business Machines Corp. tried
to sell their bosses on a forward-looking project: connecting a large
internal IBM computer network to what later became the Internet. The
plan was shot down, and IBM's leaders missed an early chance to grasp
the revolutionary significance of this emerging medium.
... snip ...
Also from wayback machine, some references off Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
PC/RT trivia: AWD did their own 4mbit T/R card for the PC/RT (PC/AT bus). Then for RS/6000 (with microchannel), AWD was told they couldn't do their own cards, but could only use the (heavily kneecapped by the communication group) PS2 microchannel cards. Turns out the PC/RT 4mbit T/R card had higher throughput than the PS2 16mbit T/R card. The new Almaden research center was heavily provisioned with CAT4, expecting to use it with T/R ... however they found that $69 CAT4 ethernet cards had higher throughput than the $800 16mbit T/R card ... as well as ethernet LAN having higher aggregate throughput and lower latency than 16mbit T/R.
risc/801, iliad, romp, rios, pc/rt, rs/6000, power, etc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: David Boggs, Co-Inventor of Ethernet, Dies at 71 Date: 01 Mar 2022 Blog: Facebook
re:
I was on the XTP technical advisory board starting 2nd half 80s (the IBM communication group did everything they could to block it) ... started by Greg Chesson at SGI ... XTP included being able to stream full 100mbit/sec on FDDI.
Xpress Transport Protocol
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol
.... above states that XTP had no congestion avoidance algorithms ... however, I wrote into the XTP spec the dynamic adaptive rate-based protocol that we had been using in HSDT ... for congestion avoidance.
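The rate-based idea, in contrast to windowing, is to control the inter-packet transmission interval directly and adapt it to observed conditions. A minimal sketch in C (the adjustment rule and constants are illustrative assumptions, not the actual HSDT/XTP algorithm):

  #include <stdio.h>

  /* minimal rate-based pacing sketch: the sender spaces packets by an
     inter-packet interval and adapts the interval to observed
     conditions, rather than blasting a window's worth back-to-back */
  static double interval_usec = 1000.0;   /* current inter-packet gap */

  void feedback(int lost, double queue_delay_usec)
  {
      if (lost || queue_delay_usec > 500.0)
          interval_usec *= 1.25;          /* congestion: slow the rate */
      else
          interval_usec *= 0.99;          /* clear: probe a bit faster */
      if (interval_usec < 100.0)
          interval_usec = 100.0;          /* cap the peak rate         */
  }

  int main(void)
  {
      feedback(1, 900.0);                 /* loss + queueing: back off */
      feedback(0, 700.0);                 /* still queueing: back off  */
      feedback(0, 50.0);                  /* clear: speed up gently    */
      printf("pacing interval now %.0f usec\n", interval_usec);
      return 0;
  }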
xtp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
Also in 1988, was asked to help LLNL (national lab) standardize some serial stuff they were playing with, which quickly becomes the Fibre Channel Standard (including some stuff I had done in 1980). About the same time, DARPA was trying to get 600mbit/sec sustained over serial HIPPI.
In 1980, STL was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" and found the human factors unacceptable. I get con'ed into doing channel extender support so they could place channel-attached 3270 controllers at the offsite bldg ... with no perceptible human factors difference between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get it veto'ed (afraid that if it was in the market, it would make it harder to justify their stuff).
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
They get their stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete: 17mbytes/sec; FCS was 1gbit, full-duplex, 2gbit aggregate, 200mbyte/sec). One of the AWD RS/6000 engineers had worked on tweaking ESCON, faster & full-duplex, but incompatible ... available on RS/6000 as SLA (serial-link adapter). We finally talked a vendor with a high-performance router box (a few hundred mbit/sec backplane, could have T1 & T3 links and/or up to 16 10mbit enet links) into adding an SLA interface to their box ... allowing RS/6000 SLA to interoperate. The engineer then wants to start on an 800mbit SLA ... and we have long discussions to get him working with fibre channel instead.
I have a long-winded old email that I sent about RS6000/AIX buffer copies for high-speed I/O starting to consume more processor cycles than the I/O driver instructions.
risc/801, iliad, romp, rios, pc/rt, rs/6000, power, etc
https://www.garlic.com/~lynn/subtopic.html#801
Some POK engineers then get involved with FCS and define a heavyweight protocol that drastically reduces native throughput ... which eventually ships as FICON. The most recent published benchmark is z196 "Peak I/O" using 104 FICON (running over 104 FCS) getting 2M IOPS. About the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON).
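Using just the numbers quoted above, the per-link comparison works out to roughly:

  #include <stdio.h>

  /* per-link IOPS comparison, numbers straight from the post */
  int main(void)
  {
      double z196_iops = 2e6, ficon_links = 104.0;
      double e5_fcs_iops = 1e6;           /* "over a million" per FCS */
      double per_ficon = z196_iops / ficon_links;

      printf("per FICON: %.0f IOPS\n", per_ficon);          /* ~19k  */
      printf("per FCS:   %.0f IOPS (~%.0fx per link)\n",
             e5_fcs_iops, e5_fcs_iops / per_ficon);         /* ~52x  */
      return 0;
  }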
ficon posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: David Boggs, Co-Inventor of Ethernet, Dies at 71 Date: 01 Mar 2022 Blog: Facebook
re:
Early 80s, started HSDT effort, T1 and faster computer links (both satellite and terrestrial)
1988 was asked to help LLNL (national lab) standardize some serial
stuff they were playing with, which quickly becomes fibre channel
standard (including some stuff I had done in 1980).
https://en.wikipedia.org/wiki/Fibre_Channel
About the same time, I was asked to attend some SLAC/Gustavson/SCI
meetings
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
Starting in 2nd half 80s, was also on the XTP (very high speed tcp/ip)
technical advisory board, started by Greg Chesson at SGI; the XTP
engine did stream pipelining (somewhat akin to the SGI graphics
display engine).
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol
... the above XTP reference states no congestion control; however, I wrote the rate-based protocol that we had been using in HSDT into the specification, including dynamic adaptive operation for congestion control. Trivia: a 1988 ACM SIGCOMM article showed how (old-fashion) "windowing" congestion control was non-stable in a large heterogeneous multi-hop network (like the internet); returning ACKs tended to clump along the way, resulting in a multi-packet burst when the ACK-bunch arrived and opened the window.
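A toy model of that ACK-clumping effect (packet counts and clump sizes are made up for illustration): when a bunch of ACKs arrives together, the window snaps fully open and the sender emits a back-to-back multi-packet burst instead of smoothly spaced packets.

  #include <stdio.h>

  /* toy model: window-based sender; ACKs arriving in a clump open the
     window all at once, producing a back-to-back multi-packet burst */
  int main(void)
  {
      int window = 8, in_flight = 8;
      int ack_clumps[] = {1, 1, 6};       /* last six ACKs arrive bunched */

      for (int i = 0; i < 3; i++) {
          in_flight -= ack_clumps[i];     /* clump of ACKs drains window  */
          int burst = window - in_flight; /* all sent back-to-back now    */
          printf("clump of %d ACKs -> burst of %d packets\n",
                 ack_clumps[i], burst);
          in_flight += burst;
      }
      return 0;
  }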
xtp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Late 80s, started HA/6000, originally for NYTimes to move their
newspaper system (ATEX) off VAXCluster; I renamed it to HA/CMP (High
Availability Cluster Multi-Processing) when I started doing
technical/scientific cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (16 systems mid92, 128
systems ye92)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
This included the work with LLNL on FCS as well as a port of their Cray LINCS filesystem spinoff, "Unitree", to HA/CMP.
Mainframe DB2 was complaining about (commercial) HA/CMP being far ahead of them, which likely contributed to cluster scale-up being transferred, announced as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
from long ago and far away, RIOS (i.e. rs6000), high-speed i/o, (LLNL)
Unitree, XTP, etc.
Date: Jun 15 09:14:21 1990
From lynn
To: zzzz@ausvm6
cc: zzz2@ausvm6, zzz3@ausvm6, zzz4@ausvm6, zzz5@ausvm6, zzz6
Subject: i/o interface
There is a somewhat separate (but the same) problem associated with
the I/O interface. The current tack is to "address the programming
model" in order to contain the number of instructions executed
associated with performing I/O.
The actual performance bottleneck has to do with minimizing the
processor cycles associated with doing I/O movement operations. The
programming model is beginning to approach a limit such that the major
processor cycle component associated with I/O operations is buffer
copies.
A major parallel attack on the processor cycle overhead going on in
the industry is to eliminate buffer copies. Every event that I
attended this week was full of no buffer copy discussions; "buffer
copy" is the current major element of processor cycle overhead; posix
threads & posix asynch I/O (features for doing I/O out of buffers in
user space w/o copying); major multithread applications using no
buffer copy from user space.
WHAT IS THE PROBLEM?
The problem is the RS/6000 memory/cache inconsistency in the I/O
interface (something I've had discussions about for more than 10
years). Currently in AIX/V3, I/O buffers/data/control is copied to
kernel and managed by knowledgeable code that has cache line
sensitivity and memory/cache inconsistency awareness. This is highly
sophisticated and non-portable code.
The one case that AIX/V3 supports no buffer copy (i.e. direct I/O
out of user space buffers), the I/O subsystem invalidates the virtual
pages that contain any I/O buffers. The page invalidate makes sure any
application doesn't accidentally shoot itself in the foot with
memory/cache inconsistency because of I/O operations.
The net result (page invalidates) is that if the application is a
multi-threaded subsystem doing (potentially multiple) asynchronous I/O
AND isn't careful about making all buffers page aligned AND page-sized
multiples .... then it is possible for the application to be forced
into single-thread, serialized execution because of page faults with
accidentally touching pages that are invalidated. If two (or more)
buffers happen to reside in the same virtual page (say lots of small
LAN environment buffers associated with the "small" bi-modal
requests), and if (at least) one of the buffers is on an active I/O
buffer ring (and therefore the page is invalid) and the application
attempts to do anything with one of the other buffers (in the same
page), then the application takes a page fault and is marked "blocked"
(non-executable) until the I/O completes and the page is validated.
Effectively application execution is serialized/blocked when doing
I/O.
SOLUTION?
A solution is to modify each of these portable applications (like the
Unitree filesystem or other applications using XTP "no
buffer copy" mode) and add AIXV3 "page invalid" bypass and RIOS
"cache line" sensitivity/awareness. Each high-performance, portable
application will have to have these modifications under conditional
compile for the AIXV3/RIOS environment.
To simplify future portability of such code, it would be useful if the
AIX/V3 compiler supported a compile-time symbol "Cache-Line.Size"
which could be used in the code to symbolically define the cache-line
boundaries. That way if the cache-line size changed on future
machines, each one of these applications wouldn't have to be modified
again.
... snip ... top of post, old email index
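A modern POSIX rendition of the workaround the email describes, i.e. keep every I/O buffer page-aligned and a page-multiple in size so no two buffers ever share a page that the I/O subsystem might invalidate (the compile-time cache-line symbol the email proposes is rendered as a preprocessor macro; the 64-byte default is an assumption):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #ifndef CACHE_LINE_SIZE
  #define CACHE_LINE_SIZE 64    /* assumed default; ideally supplied
                                   per-machine by the compiler, as the
                                   email suggests */
  #endif

  /* allocate an I/O buffer that is page-aligned and rounded up to a
     page multiple, so it never shares a page (or cache line) with a
     buffer that may be sitting on an active I/O ring */
  void *io_buf_alloc(size_t nbytes)
  {
      size_t page = (size_t)sysconf(_SC_PAGESIZE);
      size_t rounded = (nbytes + page - 1) / page * page;
      void *p = NULL;
      return posix_memalign(&p, page, rounded) == 0 ? p : NULL;
  }

  int main(void)
  {
      void *b = io_buf_alloc(1500);       /* e.g. a small LAN buffer */
      printf("buffer at %p, cache line %d bytes\n", b, CACHE_LINE_SIZE);
      free(b);
      return 0;
  }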
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: ARPANET pioneer Jack Haverty says the internet was never finished Date: 01 Mar 2022 Blog: Facebook
re:
ARPANET pioneer Jack Haverty says the internet was never
finished. When he retired with stuff left on his to-do list, he
expected fixes would flow. They haven't
https://www.theregister.com/2022/03/01/the_internet_is_so_hard/
Late 80s, got the HA/6000 product, started out for NYTimes to port
their newspaper system (ATEX) off VAXcluster to RS/6000. I rename it
to HA/CMP (High Availability Cluster Multi-Processing) when I start
doing scientific/technical cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (Oracle, Ingres,
Informix, Sybase) ... in part because they had VAXcluster support in
the same support base with UNIX. I do a distributed lock manager with
VAXcluster API semantics to ease the transition. Old post with Jan1992
HA/CMP scale-up meeting with Oracle CEO (16 systems by mid1992, 128
systems by ye1992)
https://www.garlic.com/~lynn/95.html#13
within a few weeks, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work with more than four processors. We leave IBM a few months later.
Not long later, we are brought into a small client/server startup as
consultants; two of the (former) Oracle people (in the Ellison
meeting) are there, responsible for something called "commerce
server", and want to do payment transactions on the server. The
startup had also invented this technology called "SSL" they want to
use; the result is now frequently called "electronic commerce". I have
complete authority for everything between the servers and the
(financial) payment networks ... but could only make recommendations
on the browser/server side ... some of which were almost immediately
violated. Later I gave a talk about "Why Internet Isn't Business
Critical Dataprocessing" (based on compensating procedures and
software I had to do for e-commerce) ... talk sponsored by (Internet
IETF RFC editor) Postel:
https://en.wikipedia.org/wiki/Jon_Postel
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Internet financial gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some "Why Internet Isn't Business Critical Dataprocessing" posts
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: ARPANET pioneer Jack Haverty says the internet was never finished Date: 02 Mar 2022 Blog: Facebook
re:
Part of preparing for the NSF supercomputer activity (the
supercomputer centers frequently had NSC HYPERchannel boxes), I got a
number of them ... and used them with a T1 satellite link between IBM
Los Gatos and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (this was different and not related to the 80s
IBM Kingston "supercomputer" effort). Clementi's lab had a boatload of
FPS boxes (with 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
NSC HYPERchannel boxes had 50mbit coax LANs with up to four LAN attachments for each box (all this before internal politics started kicking in).
https://en.wikipedia.org/wiki/HYPERchannel
For some reason, other groups in IBM would order HYPERchannel boxes
for some sort of testing and afterwards they would appear on the HSDT
equipment list in a warehouse. Doing some work with (UTexas) Balcones
Research ... managed to donate several of the (warehoused) boxes to
the Balcones supercomputer datacenter
https://en.wikipedia.org/wiki/J._J._Pickle_Research_Campus
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
posts mentioning Clementi's E&S lab
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#121 HSDT & Clementi's Kingston E&S lab
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021j.html#95 This chemist is reimagining the discovery of materials using AI and automation
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021e.html#14 IBM Internal Network
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021b.html#1 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#63 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018f.html#110 IBM Token-RIng
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2018e.html#71 PDP 11/40 system manual
https://www.garlic.com/~lynn/2018b.html#47 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017j.html#92 mainframe fortran, or A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2014j.html#35 curly brace languages source code style quides
https://www.garlic.com/~lynn/2014c.html#72 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#63 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014b.html#4 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2013i.html#14 The cloud is killing traditional hardware and software
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3380 disks Date: 03 Mar 2022 Blog: Facebook
when I transfer to San Jose Research (SJR) in 1977, I get to wander around lots of silicon valley, both IBM and non-IBM datacenters, including the disk engineering (bldg14) and product test (bldg15) across the street. They were doing prescheduled, 7x24, stand-alone mainframe testing. They mentioned they had tried MVS, but it had 15min mean-time-between-failure in that environment (requiring manual IPL). I offer to rewrite the input/output supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity. Downside: they would sometimes have a kneejerk reaction to hardware problems, blame me, and I would have to spend an increasing amount of time playing disk engineer diagnosing their issues.
Trivia: original 3380 had 20 track spacings between every data track. For 3380E, the spacing was cut in half to double the number of cyls-tracks, for 3380K, it was cut again, tripling the number of cyls-tracks.
I wrote up an (internal) research report on the work and happened to mention the MVS 15min MTBF ... which brings the wrath of the MVS organization down on my head (they hated anybody tarnishing their carefully managed image, even if it was purely internal). So when the 3380 was about to ship ... it didn't bother me that FE had a test case of 57 simulated errors that would be expected in live operation ... and MVS was (still) failing in all 57 cases (requiring re-IPL), and in 2/3rds of the cases there was no indication of what caused the failure.
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
other trivia: the father of IBM 801/RISC cons me into helping him with his idea for a 3380 "wide" disk head ... it would read/write 16 closely spaced data tracks in parallel (following servo-tracks on both sides of the 16-data-track grouping) ... for 48mbyte/sec transfer. It didn't get any acceptance in IBM since the best IBM mainframe channels were 3mbytes/sec ... and even the later ESCON (w/ES9000) was only 17mbytes/sec
dasd, ckd, fba, vtocs, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, etc posts
https://www.garlic.com/~lynn/subtopic.html#801
posts mentioning "wide" disk head
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018f.html#33 IBM Disks
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Link FEC and Encryption Date: 03 Mar 2022 Blog: Facebook
Note: for the IBM internal road-warrior/home program, IBM had special 2400 baud modem cards that would do hardware encryption of transmitted data. I had the HSDT project (T1 and faster computer links) and was working with Cyclotomics (founded by Berlekamp, later bought by Kodak) on Reed-Solomon forward error correction (encoded transmission with 1/16 additional data could get a 6-nines improvement in bit-error-rate). Note Cyclotomics also provided the encoding standard for CDROMs. They also presented a gimmick for selective resend (for when RS couldn't handle it) ... instead of retransmitting the original data ... they would transmit the 1/2-rate Viterbi FEC (same size record ... both the original and the Viterbi FEC could have errors and it would still be able to recover). If the environment got so bad that they were constantly having to transmit the 1/2-rate Viterbi ... they would switch to sending it with the initial record until the signal environment cleared up.
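The selective-resend gimmick amounts to a small fallback policy. A sketch in C under assumptions (the modes are from the description above; the failure counter and threshold are invented):

  #include <stdio.h>

  /* fallback policy: normally send the RS-coded record; when RS can't
     recover, resend the same-size 1/2-rate Viterbi block; after
     repeated failures, attach the Viterbi block to every initial send
     until the signal environment clears */
  enum mode { RS_ONLY, RS_PLUS_VITERBI };

  int main(void)
  {
      enum mode m = RS_ONLY;
      int consecutive_rs_failures = 0;

      for (int frame = 0; frame < 10; frame++) {
          int rs_failed = (frame >= 4 && frame <= 7);  /* pretend fade */
          if (rs_failed) {
              printf("frame %d: RS failed, resend as 1/2-rate Viterbi\n",
                     frame);
              if (++consecutive_rs_failures >= 3)
                  m = RS_PLUS_VITERBI;     /* attach Viterbi up front */
          } else {
              consecutive_rs_failures = 0;
              m = RS_ONLY;                 /* environment cleared     */
          }
          printf("frame %d: mode = %s\n", frame,
                 m == RS_ONLY ? "RS only" : "RS + attached Viterbi");
      }
      return 0;
  }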
IBM also had a requirement that internal network links (technology done by co-worker at the science center, non-SNA, larger than arpanet/internet from just about the beginning until sometime mid/late 80s) had to be encrypted. IBM 37x5 boxes only supported up to 56kbit ... so link encryptors weren't too hard to find. However for HSDT, I really hated what I had to pay for T1 encryptors, and faster encryptors were almost impossible to find.
trivia: early 80s, I did some timing tests with the DES encryption standard ... it ran about 150kbytes/sec on a 3081K processor ... both processors would have to be dedicated to encryption/decryption to handle a full-duplex T1.
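The arithmetic behind that claim, using the standard T1 rate and the measured DES throughput (a full-duplex T1 means encrypting one direction while decrypting the other):

  #include <stdio.h>

  /* why a full-duplex T1 consumed the (two-processor) 3081K for DES */
  int main(void)
  {
      double t1_Bps  = 1.544e6 / 8;  /* T1 one direction: ~193 kbytes/sec */
      double work    = 2 * t1_Bps;   /* full duplex: encrypt + decrypt    */
      double des_Bps = 150e3;        /* measured: ~150 kbytes/sec per CPU */

      printf("DES work %.0f kbytes/sec = %.1f processors' worth\n",
             work / 1e3, work / des_Bps);  /* ~2.6: both CPUs, and more */
      return 0;
  }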
posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
some posts mentioning Reed-Solomon, Viterbi, and/or Cyclotomics
https://www.garlic.com/~lynn/2021k.html#61 IBM Lasers
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2019.html#8 Network names
https://www.garlic.com/~lynn/2017g.html#52 Boyd's OODA-loop
https://www.garlic.com/~lynn/2016f.html#57 Oldest computer in the US government
https://www.garlic.com/~lynn/2016c.html#57 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2015g.html#9 3380 was actually FBA?
https://www.garlic.com/~lynn/2015g.html#6 3380 was actually FBA?
https://www.garlic.com/~lynn/2015e.html#3 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#55 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014k.html#9 Fwd: [sqlite] presentation about ordering and atomicity of filesystems
https://www.garlic.com/~lynn/2014j.html#68 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape
https://www.garlic.com/~lynn/2013n.html#34 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013n.html#33 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013n.html#31 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013m.html#102 Interesting? How _compilers_ are compromising application security
https://www.garlic.com/~lynn/2013k.html#58 DASD, Tape and other peripherals attached to a Mainframe
https://www.garlic.com/~lynn/2011o.html#65 Hamming Code
https://www.garlic.com/~lynn/2011g.html#69 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#60 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#58 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010i.html#23 Program Work Method Question
https://www.garlic.com/~lynn/2010g.html#26 Tapes versus vinyl
some other posts mentioning link encryptors:
https://www.garlic.com/~lynn/2022.html#125 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#83 Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#72 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021h.html#84 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#66 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#8 IBM Travel
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019d.html#36 The People Who Invented the Internet: #Reviewing The Imagineers of War
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018d.html#72 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018d.html#33 Online History
https://www.garlic.com/~lynn/2018.html#10 Landline telephone service Disappearing in 20 States
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017e.html#50 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Link FEC and Encryption Date: 03 Mar 2022 Blog: Facebook
re:
got 2741 at home March1970 with a wooden box "acoustic coupler" (placed the handset in and closed the lid) ... summer of 1977 it was replaced with a 300 baud ascii CDI miniterm, then replaced with a 1200 baud ascii (IBM) 3101 glass teletype, then my own IBM PC (with one of IBM's 2400baud encrypting modem cards).
other trivia ... within a year of 2credit intro to fortran/computers, univ hires me fulltime responsible for os/360. Univ. had been sold 360/67 for tss/360 to replace 709/1401. TSS/360 never came to production fruition, so univ ran as 360/65 w/os360.
then some people from the science center came out and installed (virtual machine) CP67 (precursor to vm370) at the univ (3rd installation after CSC itself and MIT Lincoln Labs) and I would get to play with it during my weekend dedicated time ... rewriting whole bunches of the code. The original CP67 had support for 1052 & 2741 terminals with automagic terminal type recognition (using the terminal controller SAD CCW to change the terminal-type scanner for each port). The univ. had some number of ASCII terminals, so I added ASCII support (extending automagic terminal type recognition to ASCII). Trivia: when the box arrived for the CE to install the tty/ascii port scanner in the 360 terminal controller, the box was labeled "Heathkit".
I then wanted to have a single dial-in number for all terminals ... hunt
group
https://en.wikipedia.org/wiki/Line_hunting
It didn't quite work: while I could switch the line scanner for each
port (on the IBM telecommunication controller), IBM had taken a short
cut and hard-wired the line speed for each port (TTY was a different
line speed from 2741&1052). Thus was born the univ. project to do a
clone controller: we built a mainframe channel interface board for an
Interdata/3 programmed to emulate the mainframe telecommunication
controller, with the addition that it could also do dynamic line speed
determination. Later it was enhanced with an Interdata/4 for the
channel interface and a cluster of Interdata/3s for the port
interfaces. Interdata (and later Perkin/Elmer) sold it commercially as
an IBM clone controller. Four of us at the univ. get written up as
responsible for (some part of the) clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
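Dynamic line speed determination is the classic auto-baud technique: the start bit of the first character typed is one bit-time wide, so time it and pick the nearest standard rate. A minimal sketch of the idea in C (the rate table and timing source are illustrative; this is not the actual Interdata implementation):

  #include <stdio.h>

  /* classic auto-baud: measure the width of the start bit of the first
     typed character and pick the nearest standard line speed */
  int guess_baud(double start_bit_usec)
  {
      static const int rates[] = {110, 134, 300, 1200};
      int best = rates[0];
      double best_err = 1e9;
      for (int i = 0; i < 4; i++) {
          double bit_usec = 1e6 / rates[i];    /* one bit-time at rate */
          double err = start_bit_usec > bit_usec
                     ? start_bit_usec - bit_usec
                     : bit_usec - start_bit_usec;
          if (err < best_err) { best_err = err; best = rates[i]; }
      }
      return best;
  }

  int main(void)
  {
      printf("%d baud\n", guess_baud(9090.0));  /* ~9.09ms -> 110 baud */
      printf("%d baud\n", guess_baud(3333.0));  /* ~3.33ms -> 300 baud */
      return 0;
  }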
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
The clone controller business is claimed to be the major motivation
for the IBM Future System effort in the 70s (make the interface so
complex that clone makers couldn't keep up). From the law of
unintended consequences: FS was completely different from 370 and was
going to completely replace it, and internal politics was shutting
down the 370 projects ... the lack of new IBM 370 offerings is claimed
to have given the clone 370 processor makers their market foothold (FS
as countermeasure to clone controllers becomes responsible for the
rise of clone processors). Some FS details
http://www.jfsowa.com/computer/memo125.htm
When FS implodes, there was a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033 and 3081
efforts in parallel.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
around the turn of the century, I ran into one of the descendants of our Interdata box handling the majority of credit card point-of-sale dialup terminals (east of the mississippi) ... some claim it still used our original channel interface board design (in some smaller retail stores you could still hear the sound of the credit card terminals connecting).
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Disks Date: 03 Mar 2022 Blog: Facebook
within a year of 2credit intro to fortran/computers, univ hires me fulltime responsible for os/360. Univ. had been sold 360/67 for tss/360 to replace 709/1401. TSS/360 never came to production fruition, so univ ran as 360/65 w/os360. Univ. shutdown datacenter on weekends and I would have the whole place to myself, although 48hrs w/o sleep could make monday morning classes a little hard.
Before I graduate, I'm hired fulltime into small group in the Boeing CFO's office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thot Renton datacenter was possibly largest in the world, couple hundred million in 360s, IBM 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around machine room. Lots of politics between Renton datacenter manager and CFO (who just had a 360/30 up at Boeing field for payroll, although they expanded the machine room and added a 360/67 for me to play with, when I wasn't doing other stuff). Nearly everything 2314s. When I graduate, I join the IBM science center (instead of staying at Boeing).
Later when I transfer to IBM San Jose Research, I get to wander around both IBM & non-IBM datacenters in silicon valley, including disk engineering (bldg14) and product test (bldg15) across the street. They were running prescheduled, 7x24, stand-alone machine testing ... they said they had recently tried MVS ... but it had 15min mean-time-between-failure in that environment (requiring manual re-ipl). I offer to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of concurrent, on-demand testing ... greatly improving productivity. Also product test tended to get very early engineering processor machines (for disk channel testing) and when they got an early (#3 or #4) engineering 3033, we found a couple strings of 3330s and a 3830 controller to set up a private online service (since disk i/o testing only took a percent or two of processing).
One monday morning, I get an irate call from product test asking what I had done to the 3033 system over the weekend because response and throughput had degraded enormously. I said nothing. Eventually found that somebody had replaced the 3830 controller with a 3880 controller. Turns out the 3830 had a fast horizontal microprogrammed processor ... the 3880 had a very slow vertical microprogrammed jib-prime processor (although it had a fast datastreaming hardware data transfer bypass that would handle up to 3mbyte/sec, everything else was enormously slower). They had initially tried faking it by having the 3880 signal the channel-program-end interrupt to the processor early, figuring they could actually finish the controller processing while the system was fiddling around handling the interrupt. Some early testing with MVS went fine ... however my rewrite of the I/O system had 1/10th to 1/20th the pathlength of MVS I/O processing (while at the same time having much higher reliability and integrity) and would try to start the next operation while the 3880 controller was still busy (even though it had signaled operation complete). I would claim that part of the 370/xa I/O features was to try and offset how bad the MVS I/O pathlength was (as well as the "Z" system-assist dedicated processors).
getting to play disk engineering posts
https://www.garlic.com/~lynn/subtopic.html#disk
There was also somebody running "air bearing" simulation (part of
design for thin-film, floating disk heads, originally used for 3370
fixed-block disks) on research's 370/195 MVT system ... however, even
with high priority designation, they were still getting one to two
week turn arounds. We set them up so they could run on bldg15 3033,
and even though it was less than half the processing of 370/195, they
could still get several turn arounds a day.
https://en.wikipedia.org/wiki/Disk_read-and-write_head#Thin-film_heads
dasd, ckd, fba, multi-track search, etc. posts
https://www.garlic.com/~lynn/submain.html#dasd
In the early 80s, I'm introduced to John Boyd and would sponsor his
briefings at IBM. He had some number of stories, including being very
vocal that the electronics across the trail wouldn't work ... so
possibly as punishment, he is put in command of "spook base" (about
the same time I'm at Boeing). One biography mentions "spook base" was
a $2.5B "windfall" for IBM (ten times Renton datacenter, although
details only mention two 360/65s). ref gone 404, but lives on at
wayback machine
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White
trivia: in 89/90 Commandant of Marine Corps leverages Boyd for a corps
"make over" at the same time that IBM was desperately in need of "make
over" ... aka, IBM was heading into the red and was being reorged
into the 13 "baby blues" in preparation for breaking up the
company. .... reference gone behind paywall but mostly lives free at
wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Boyd posts and URL refs
https://www.garlic.com/~lynn/subboyd.html
some past posts mentioning air bearing simulation &/or thin-film,
floating heads
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#97 This chemist is reimagining the discovery of materials using AI and automation
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#71 Software as a Replacement of Hardware
https://www.garlic.com/~lynn/2016c.html#3 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015b.html#61 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2014l.html#78 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2011p.html#134 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#26 Deja Cloud?
https://www.garlic.com/~lynn/2011n.html#36 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2011f.html#87 Gee... I wonder if I qualify for "old geek"?
https://www.garlic.com/~lynn/2011.html#60 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#57 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Frameworks Quagmire Date: 03 Mar 2022 Blog: Facebook
... from having been involved in e-commerce in the early 90s,
got dragged into financial industry standards and financial industry
critical infrastructure protection
https://en.wikipedia.org/wiki/Critical_infrastructure_protection
note software.org still around
https://software.org/
... some stuff from late 90s and first part of century:
slightly related (software) complexity (now behind some sort of
registration and/or totally gone but lives on at the wayback machine)
"The Frameworks Quagmire"
https://web.archive.org/web/20060831110450/http://www.software.org/quagmire/
2167A (near the top right in the above figure) typically required ten times the effort of standard industrial strength dataprocessing (which i've frequently shown can be ten times the development effort of a typical web application).
In the late 90s, I held a number of sessions looking at increasing tool sophistication ... requiring a corresponding increase in learning curve and skills ... which would reduce the 2167A-associated effort by a factor of five (making it possibly only 2-3 times that of a typical industrial strength dataprocessing effort).
posts referencing gateway between e-commerce (internet/web) server
and financial payment networks:
https://www.garlic.com/~lynn/subnetwork.html#gateway
... and posts mentioning software quagmire and/or 2167A
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2014f.html#13 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2012f.html#30 Before Disruption...Thinking
https://www.garlic.com/~lynn/2011p.html#101 Perspectives: Looped back in
https://www.garlic.com/~lynn/2010g.html#61 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#60 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#17 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#16 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010c.html#66 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#64 Happy DEC-10 Day
https://www.garlic.com/~lynn/2006q.html#40 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006q.html#13 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006.html#37 The new High Assurance SSL Certificates
https://www.garlic.com/~lynn/2005u.html#17 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005d.html#52 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2004q.html#46 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#1 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2003k.html#16 Dealing with complexity
https://www.garlic.com/~lynn/2002e.html#70 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#69 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#59 Computers in Science Fiction
https://www.garlic.com/~lynn/2001i.html#55 Computer security: The Future
https://www.garlic.com/~lynn/aadsm27.htm#50 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Frameworks Quagmire Date: 03 Mar 2022 Blog: Facebook
re:
In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. The first time, I tried to do it through plant site employee education and at first they agreed, but as I provided more information (including that the briefings covered prevailing in competitive situations), they changed their mind. They said that IBM spends a great deal of effort training managers in how to deal with employees and it wouldn't be in IBM's best interest to expose general employees to Boyd; I should limit the audience to senior members of competitive analysis departments. The first briefing was in the San Jose Research auditorium, open to all.
Boyd's OODA-loop
https://en.wikipedia.org/wiki/OODA_loop
has found its way into some number of "agile" business processes.
https://medium.com/on-track/defining-an-agile-delivery-plan-with-the-OODA-loop-c723b21b4f1c
https://www.linkedin.com/pulse/business-agility-OODA-loop-adam-knight/
https://opexsociety.org/body-of-knowledge/OODA-and-agility-reaching-a-conclusion-faster/
https://waydev.co/OODA-agile-data-driven/
https://www.sandordargo.com/blog/2021/08/25/ooda_loop_decision_making
trivia: In 89/90, the Commandant of the Marine Corps leverages Boyd
for a make-over of the corps ... at a time when IBM was desperately in
need of a make-over ... heading into the red and was being reorg'ed
into the 13 "baby blues" in preparation for breaking up the company,
.... reference gone behind paywall but mostly lives free at wayback
machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
Boyd posts & URL refs:
https://www.garlic.com/~lynn/subboyd.html
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Link FEC and Encryption Date: 04 Mar 2022 Blog: Facebook
re:
... and when doing the (interdata-based) 360 telecommunication
clone ... also added 2741 support to HASP ... along with implementing
the (CP67/)CMS editor syntax ... which I thot was a much better (C)RJE
than IBM's ... discussed in this old post tracking down the decision
to make all 370s "virtual memory"
https://www.garlic.com/~lynn/2011d.html#73
... i.e. MVT storage management was so bad that regions had to be four times larger than actually used, meaning a typical 1mbyte 370/165 only had four regions ... not enough to keep the processor reasonably utilized. Going to 16mbyte virtual memory could increase the number of regions by a factor of four with little or no paging.
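A round-number illustration of the region arithmetic (the 256kbyte declared-region size is an assumption for illustration):

  #include <stdio.h>

  /* MVT regions vs 16mbyte virtual memory; the 256kbyte declared
     region is a round-number assumption */
  int main(void)
  {
      int real_kb = 1024;                 /* 370/165 real storage */
      int declared_kb = 256;              /* region as declared   */
      int touched_kb = declared_kb / 4;   /* actually used: 64kb  */

      int real_regions = real_kb / declared_kb;   /* 4 regions  */
      int virt_regions = 4 * real_regions;        /* 16 regions */

      printf("real MVT: %d regions\n", real_regions);
      printf("16mb virtual: %d regions, %d kb actually touched\n",
             virt_regions, virt_regions * touched_kb); /* fits in 1mb */
      return 0;
  }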
HASP/JES posts
https://www.garlic.com/~lynn/submain.html#hasp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 04 Mar 2022 Blog: Facebook
again, recent channel i/o
360s had relatively more system I/O capacity than real storage ... and CKD used a trade-off of multi-track searches (like PDS directory lookup) rather than keeping info cached in real storage. By the mid-70s, that trade-off was starting to switch ... there was relatively more system memory than I/O capacity. In the early 80s, I wrote a memo that the relative system throughput of disks had declined by an order of magnitude ... i.e. systems had gotten 40-50 times faster while disks only got 3-5 times faster. A GPD/disk division executive took offense and directed the division performance group to refute my claims ... but after a couple weeks they came back and said I had slightly understated the problem. Example: CP/67 with 80 users going to VM370 on a 3081K should have had 4000 users ... but typically only had 300-400 users (i.e. proportional to the disk access improvement, not the system speed improvement). The group then respun their analysis into configuring disks for improved system throughput, presentation B874 given 16Aug1984 at SHARE 63. Recently there is a claim that real memory is the new disk: if memory access latency (on cache miss) is measured in count of CPU cycles, the count is similar to 60s disk access latency measured in count of 60s processor cycles.
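The arithmetic in the example works out directly (mid-range values from the 40-50x and 3-5x figures above):

  #include <stdio.h>

  /* relative disk throughput decline, mid-range numbers from the post */
  int main(void)
  {
      double cpu_x = 45.0, disk_x = 4.0;   /* 40-50x CPUs, 3-5x disks */
      int cp67_users = 80;

      printf("relative decline: ~%.0fx\n", cpu_x / disk_x);        /* ~11x  */
      printf("users if CPU-limited:  %.0f\n", cp67_users * cpu_x); /* ~3600 */
      printf("users if disk-limited: %.0f\n", cp67_users * disk_x);/* ~320  */
      return 0;
  }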
some recent posts mentioning "B874"
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
Other (80s) trivia: IBM channel protocol was half-duplex with lots of end-to-end protocol chatter ... as transfer speed increased, the end-to-end half-duplex protocol chatter started to dominate throughput. The mid-80s went from 3330 800kbyte/sec transfer to 3380 3mbyte/sec transfer. The 3090 people originally configured the number of channels based on 3380 3mbyte transfer, assuming the 3880 controller was at least as fast as the (3330) 3830 controller. However, while the 3880 controller supported 3mbyte transfer, its microprocessor was significantly slower than the 3830 microprocessor ... which significantly drove up the channel busy for the end-to-end protocol chatter. When the 3090 processor people realized they had to significantly increase the number of channels (to offset the significant increase in channel busy), the increase in the number of channels required an extra TCM. The 3090 people semi-facetiously said that they would bill the 3880 controller group for the increase in 3090 manufacturing costs (for the extra TCM). Marketing eventually respun the great increase in the number of 3090 channels as it being a great I/O machine ... when it actually was to offset the latency overhead of the 3880 end-to-end protocol chatter.
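A simple way to see why the slower 3880 microcode forced more channels: per-operation channel busy is data-transfer time plus protocol-chatter time, and the chatter term is pure controller microcode latency. In the sketch below, the 4kbyte transfer size and both chatter latencies are invented for illustration; only the 3mbyte/sec transfer rate is from the post.

  #include <stdio.h>

  /* per-operation channel busy = transfer time + protocol chatter;
     4kbyte transfer assumed, chatter latencies invented */
  int main(void)
  {
      double xfer_ms = 4096.0 / 3e6 * 1e3;  /* ~1.4ms at 3mbyte/sec */
      double chat_3830_ms = 0.3, chat_3880_ms = 1.5;
      double busy_3830 = xfer_ms + chat_3830_ms;
      double busy_3880 = xfer_ms + chat_3880_ms;

      printf("3830: %.1fms/op, 3880: %.1fms/op (%.1fx channel busy)\n",
             busy_3830, busy_3880, busy_3880 / busy_3830);
      return 0;
  }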
posts referencing getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
More (80s) trivia: In 1980, IBM STL (bldg) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing support back to the STL datacenter. They had tried "remote" 3270 support ... but found the response and human factors unacceptable (especially compared to what they were used to in STL; this was the period when articles were starting to appear about the productivity improvements with sub-second response). I get con'ed into doing channel-extender support, placing channel-attached 3270 controllers at the offsite bldg, and the IMS people saw no difference in human factors between working remotely and inside STL. Part of the support was a channel emulator at the offsite bldg, downloading channel programs to the offsite channel emulator ... and running the channel-extender link full-duplex (the end-to-end half-duplex channel protocol chatter no longer ran over the link between the offsite bldg and the STL datacenter). The hardware vendor then tries to get IBM to release my support, but a group in POK that is playing with some serial stuff gets it vetoed (they were afraid that if my support was in the market, it would make it harder to justify releasing their support).
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
Then in 1988, LLNL (national lab) is playing with some serial stuff, and the local IBM branch asks if I could help LLNL get it standardized, which quickly becomes fibre channel standard (including some of the stuff I had done in 1980). Finally the POK people get their stuff released in 1990, with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec ... FCS starts out 1gbit/sec, full duplex, 2gbit/sec aggregate, 200mbyte/sec).
Then some POK engineers become involved in FCS, defining a heavyweight protocol that significantly cuts throughput, which eventually ships as "FICON". The most recent published benchmark is "Peak I/O" for a max-configured z196 (80 processors, 50BIPS) getting 2M IOPS using 104 FICON (running over 104 FCS). Approx. the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS).
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
Other trivia: in the late 70s when I transfer to IBM San Jose Research (bldg28, on the main disk plant site, before the new Almaden bldg was built up the hill), I get to wander around both IBM and non-IBM datacenters in silicon valley ... including disk engineering (bldg14) and product test (bldg15) across the street. They were running prescheduled, 7x24, stand-alone machine testing ... they said they had recently tried MVS ... but it had 15min mean-time-between-failure in that environment (requiring manual re-ipl). I offer to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of concurrent, on-demand testing ... greatly improving productivity. Also product test tended to get very early engineering processor machines (for disk channel testing) and when they got an early (#3 or #4) engineering 3033, we found a couple strings of 3330s and a 3830 controller to set up a private online service (since disk i/o testing only took a percent or two of processing).
One monday morning, I get an irate call from product test asking what I had done to the 3033 system over the weekend because response and throughput had degraded enormously. I said nothing. Eventually found that somebody had replaced the 3830 controller with a 3880 controller. Turns out the 3830 had a fast horizontal microprogrammed processor ... the 3880 had a very slow vertical microprogrammed jib-prime processor (although it had a fast datastreaming hardware data transfer bypass that would handle up to 3mbyte/sec, everything else was enormously slower). They had initially tried faking it by having the 3880 signal the channel-program-end interrupt to the processor early, figuring they could actually finish the controller processing while the system was fiddling around handling the interrupt. Some early testing with MVS went fine ... however my rewrite of the I/O system had 1/10th to 1/20th the pathlength of MVS I/O processing (while at the same time having much higher reliability and integrity) and would try to start the next operation while the 3880 controller was still busy (even though it had signaled operation complete). I would claim that part of the 370/xa I/O features was moving the really bad MVS I/O pathlength into hardware (more recently, the "Z" system-assist SAPs, dedicated I/O processors).
I wrote up an (internal) research report on the work for bldgs 14&15 and happened to mention the MVS 15min MTBF ... which brings the wrath of the MVS organization down on my head (they hated anybody tarnishing their carefully managed image, even if it was purely internal). So when the 3380 was about to ship ... it didn't bother me that FE had a test case of 57 simulated errors that would be expected in live operation ... and MVS was (still) failing in all 57 cases (requiring re-IPL), and in 2/3rds of the cases there was no indication of what caused the failure.
some posts referencing "MVS wrath"
https://www.garlic.com/~lynn/2022b.html#70 IBM 3380 disks
https://www.garlic.com/~lynn/2022b.html#17 Channel I/O
https://www.garlic.com/~lynn/2022b.html#7 USENET still around
https://www.garlic.com/~lynn/2022b.html#5 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2022b.html#0 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#42 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#35 Error Handling
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 04 Mar 2022 Blog: Facebook
most recent
Re: writing memos; In the late 70s and early 80s, I was also blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, non-SNA, technology done by co-worker at science center, also used for corporate sponsored univ BITNET). It really took off spring of 1981 after I distributed a trip report of a visit to Jim Gray at Tandem (he had left IBM Research in the fall of 1980, foisting off on me some amount of System/R, the original SQL/relational implementation, and other DBMS work).
RDBMS & System/r posts
https://www.garlic.com/~lynn/submain.html#systemr
There were only about 300 people actively participating, but claims
that up to 25,000 were reading. From IBM JARGON:
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
We then print six copies of some 300 pages, with executive summary and
summary of the summary, package them in Tandem 3-ring binders and send
them to the corporate executive committee (folklore is 5of6 wanted to
fire me; possibly one of the inhibitors was that one of my hobbies
after joining IBM was enhanced production operating systems for
internal datacenters, including the world-wide sales&marketing support
"HONE" systems, lots of datacenters, not just bldg14&15). From the
summary of the summary:
• The perception of many technical people in IBM is that the company is
rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible
• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors to
the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action
• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment.
... snip ...
... took another decade (1981-1992) ... IBM had gone into the red and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company .... reference gone behind paywall but mostly
lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs; after the breakup, all of those contracts would be between different companies, so all of the MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 04 Mar 2022 Blog: Facebook
most recent
trivia: fiber/serial full-duplex (actually "dual-simplex" simulating full-duplex) had a pair of fibers, each dedicated to transmission in one direction.
In the early 80s, I also started the HSDT project (T1 and faster computer links), was working with the director of NSF, and was supposed to get $20M to interconnect the NSF supercomputer centers. Was also doing some FEC work with Cyclotomics up at Berkeley (founded by Berlekamp, later bought by Kodak) on rate-15/16 Reed-Solomon forward error correction. Trivia: Cyclotomics also provided the encoding standard for CDROMs.
Then congress cuts the budget, some other things happen and eventually
an RFP is released (in part based on what we already had
running). Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid; the RFP was awarded 24Nov87). The winning bid doesn't even install the T1 links called for ... they are 440kbit/sec links ... but apparently to make it look like it's meeting the requirements, they install telco multiplexors with T1 trunks (running multiple links/trunk). We periodically ridicule them, asking why they don't call it a T5 network (because some of those T1 trunks would in turn be multiplexed over T3 or even T5 trunks). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
First part of 80s (somewhat preparing for interconnecting NSF
supercomputer centers) ... had T1 satellite link between IBM Los Gatos
and Clementi
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (this was different and not related to the 80s
IBM Kingston's "supercomputer" effort). Clementi's lab would have a
boatload of FPS boxes (with 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Channel I/O Date: 04 Mar 2022 Blog: Facebook
most recent
Late 80s and early 90s there were lots of discussions of "no buffer copy" I/O (aka OS/360 had GET/PUT with buffer copy, and READ/WRITE asynchronous using buffers in application memory with no buffer copy). There was a comparison of VTAM LU6.2 with NFS & TCP/IP: VTAM LU6.2 had a total of 160k instruction pathlength, compared to 5k instructions for NFS TCP/IP. However, UNIX was down to five buffer copies, and lots of work was being done to do TCP/IP right out of user memory (with no buffer copy). It turns out that VTAM LU6.2 had so many buffer copies that the total processor time for the buffer copies exceeded the processor time for the 160k instructions.
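A back-of-envelope sketch of that buffer-copy claim (the instruction counts are from the comparison above; the MIPS rate, copy bandwidth, copy counts, and message size are made-up placeholders just to show the crossover):

    # processor time = instruction pathlength time + data-movement time for copies
    def cpu_ms(instructions, copies, msg_bytes, mips=10.0, copy_bytes_per_us=25.0):
        instr_ms = instructions / (mips * 1000.0)
        copy_ms = copies * msg_bytes / copy_bytes_per_us / 1000.0
        return instr_ms, copy_ms

    print(cpu_ms(160_000, 15, 32_768))  # LU6.2-like: copy time exceeds instruction time
    print(cpu_ms(5_000, 5, 32_768))     # TCP-like: copies dominate the tiny pathlength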
disclaimer: I was on the XTP technical advisory board (which the IBM communication group had fiercely fought to block), started by Greg Chesson at SGI ... and was working on "fast" TCP that could scatter/gather pipelined I/O out the LAN interface with zero buffer copies.
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol
... the above XTP reference states there is no congestion control, but I wrote the dynamic adaptive rate-based protocol that we had been using in HSDT for congestion control into the specification. Trivia: a 1988 ACM SIGCOMM article showed how (old-fashioned) "windowing" congestion control was unstable in a large heterogeneous multi-hop network (like the internet); among other things, returning ACKs tended to clump along the way, resulting in a multi-packet burst as the window snapped open when the ACK-bunch arrived.
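A minimal sketch of the rate-based idea (not the XTP specification text): instead of letting a window snap open and burst when a clump of ACKs arrives, space transmissions by an inter-packet interval adapted from the observed delivered rate. Class and method names here are hypothetical, and the blending factor is a placeholder.

    import time

    class RatePacer:
        def __init__(self, rate_bps, pkt_bytes=1500):
            self.pkt_bytes = pkt_bytes
            self.interval = pkt_bytes * 8 / rate_bps  # seconds between sends
            self.next_send = time.monotonic()

        def send(self, transmit):  # transmit: callable that puts one packet on the wire
            now = time.monotonic()
            if now < self.next_send:
                time.sleep(self.next_send - now)  # pace; never burst
            transmit()
            self.next_send = max(now, self.next_send) + self.interval

        def feedback(self, delivered_bps):
            # dynamically adapt the interval toward the observed delivered rate
            target = self.pkt_bytes * 8 / max(delivered_bps, 1.0)
            self.interval = 0.9 * self.interval + 0.1 * target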
XTP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
The IBM communication group had been (also) fiercely fighting off client/server and distributed computing, including attempting to block the release of mainframe TCP/IP support. When they lost the battle for releasing mainframe TCP/IP, they changed tactics and said that since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate while using nearly a whole 3090 processor. I did the support for RFC1044, and in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
Later the communication group contracted with somebody in silicon valley to do TCP/IP support directly in VTAM. What he demo'ed had TCP/IP running much faster than LU6.2. He was then told that everybody "knows" that a *PROPER* TCP/IP implementation runs much slower than LU6.2 *AND* they would only be paying for a *PROPER* implementation.
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: SUSE Reviving Usenet Newsgroups: alt.folklore.computers Date: Fri, 04 Mar 2022 13:35:43 -1000
Mike Spencer <mds@bogus.nodomain.nowhere> writes:
co-worker at cambridge science center was responsible for what becomes
the ibm internal network (larger than arpanet/internet from just
about the beginning until sometime mid/late 80s) ... was also used
for the corporate sponsored BITNET:
https://en.wikipedia.org/wiki/BITNET
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
In '77, we transfer out to IBM San Jose Research, and in fall of 1982 get
a csnet gateway (NSF funds CSNET; it later merges with BITNET)
https://en.wikipedia.org/wiki/CSNET
CSNET was a forerunner of the National Science Foundation Network
(NSFNet) which eventually became a backbone of the Internet. CSNET
operated autonomously until 1989, when it merged with Bitnet to form the
Corporation for Research and Educational Networking (CREN). By 1991, the
success of the NSFNET and NSF-sponsored regional networks had rendered
the CSNET services redundant, and the CSNET network was shut down in
October 1991.[9]
... snip ...
1998 afc post with SJR CSNET gateway announce
https://www.garlic.com/~lynn/98.html#email821022
2000 afc posts with CSNET email about ARPANET moving off NCP to TCP/IP
https://www.garlic.com/~lynn/2000e.html#email821230
and CSNET email about the transition having more than a few problems
https://www.garlic.com/~lynn/2000e.html#email830212
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Edson was responsible for the internal network
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMerc article about Edson (he recently passed aug2020) and "IBM'S
MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives
free at wayback machine) SJMerc article
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some references off Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Starting early 80s, I had the HSDT project (T1 and faster computer links) and, working with the NSF director, was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cuts the budget, some other things happen, and finally an RFP is released (based in part on what we already had running). Copy of Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program
to provide Supercomputer cycles; the New Technologies Program to foster
new supercomputer software and hardware developments; and the Networking
Program to build a National Supercomputer Access Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP. The NSF director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse (as did claims that what we already had running was at least 5yrs ahead of the winning bid). The winning bid doesn't even install the T1 links called for ... they are 440kbit/sec links ... but apparently to make it look like it's meeting the requirements, they install telco multiplexors with T1 trunks (running multiple links/trunk). We periodically ridicule them, asking why they don't call it a T5 network (because some of those T1 trunks would in turn be multiplexed over T3 or even T5 trunks). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Frameworks Quagmire Date: 04 Mar 2022 Blog: Facebook
re:
a lot more detail on Frameworks Quagmire
https://web.archive.org/web/20001018151708/http://www.software.org/quagmire/frampapr/frampapr.html
Types of Compliance Frameworks
The first step toward making sense of the Quagmire is to categorize
the frameworks by purpose. One or more of the six categories in Table
1 apply to most of the frameworks.
1. Standards and Guidelines
2. Process Improvement (PI) Models and Internal Appraisal Methods
3. Contractor Selection Vehicles
4. Quality Awards
5. Software Engineering Life-Cycle Models
6. Systems Engineering Models
... snip ...
also 2167a
https://en.wikipedia.org/wiki/DOD-STD-2167A
498
https://en.wikipedia.org/wiki/MIL-STD-498
12207
https://en.wikipedia.org/wiki/IEEE_12207
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3380 disks Date: 04 Mar 2022 Blog: Facebook
re:
Internally, IBM had a data activity monitor that fed a 3330/3350 -> 3380 move application ... attempting to load balance across 3380 drives. Part of the issue was that the increase in bytes on a 3380 was significantly more than the 3380 performance increase. Without actual activity data, the approximate rule-of-thumb for 3330->3380 (the original 3380, before "E" and "K") was to load the 3380 to only 80% capacity: take the 3330's access capability per mbyte of data (the rate at which it can access 4k records, divided by the number of 3330 mbytes), take the 3380's the same way, and limit the amount of mbytes put on the 3380 so that the 3380's accesses-per-mbyte doesn't drop below the 3330's accesses-per-mbyte.
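The rule-of-thumb as arithmetic, in a minimal sketch (the capacities and access times below are illustrative placeholders, not the actual device specs; per the text above, the real numbers came out around 80% for the original 3380):

    # accesses/sec a drive can deliver per mbyte of data stored on it
    def accesses_per_sec_per_mb(avg_access_ms, mb_used):
        return (1000.0 / avg_access_ms) / mb_used

    mb_3330, ms_3330 = 200.0, 30.0   # placeholder 3330 capacity & avg access
    mb_3380, ms_3380 = 630.0, 24.0   # placeholder 3380 capacity & avg access

    baseline = accesses_per_sec_per_mb(ms_3330, mb_3330)  # 3330 accesses/sec per mbyte
    max_mb = (1000.0 / ms_3380) / baseline   # most data a 3380 can hold at that ratio
    print(f"load 3380 to at most {min(1.0, max_mb / mb_3380):.0%} ({max_mb:.0f} mbytes)")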
Got into a dustup with the 3880 cache controller guys in Tucson ... over Ironwood/Sheriff (8mbyte cache; the 4k record cache 3880-11, and the full-track cache 3880-13).
1) The 3880-13 pub claimed a 90% hit rate ... i.e. for sequential reads of 4k records, 10 records/track, the first record was a miss that staged the full track into cache, followed by "hits" for the next 9 sequential records (arithmetic sketched in the code after point 2). However, if an app went to full-track buffer reads, the hit rate would drop from 90% to 0%.
2) The 3880-11 Ironwood cache was for 4k paging ... in lieu of IBM having newer paging devices, they were being connected to 32mbyte 3081 systems. I showed that when the 3081 read a page, the page would then be in both the 3880-11 cache and 3081 memory ... and since 3081 memory was so much larger ... the "duplicate" in the 3880-11 cache would never be needed. As the 3081 read other pages, the earlier 4k records in the 3880-11 would be replaced (what was needed was what I called a "no-dup" strategy, always reading 4k pages with the cache-bypass CCW).
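A minimal sketch of the hit-rate arithmetic from point 1 (the record counts are from the text; the model is illustrative, not the controller microcode):

    RECORDS_PER_TRACK = 10

    # track-granularity cache: the first touch of a track is a miss that
    # stages the whole track; further requests to that track are hits
    def hit_rate(requests_per_track):
        misses = 1
        return (requests_per_track - misses) / requests_per_track

    print(hit_rate(RECORDS_PER_TRACK))  # sequential 4k-record reads: 0.9 ("90% hit rate")
    print(hit_rate(1))                  # full-track buffer reads: 0.0
    # point 2's "no-dup": with processor memory larger than the 8mbyte cache,
    # a page just read is already in memory, so also caching it wastes a slot;
    # hence always reading pages with the cache-bypass CCW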
Some of this was partially mitigated when the caches were increased to 32mbytes for the 3880-21 and 3880-23.
Earlier, we had modified several systems in the San Jose area with a highly efficient disk activity monitor (recording the CCHHR for every disk I/O) and used the activity to feed an I/O cache simulator ... which modeled a large variety of different kinds and sizes of cache implementations.
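A minimal sketch of a trace-driven cache simulator of that sort, assuming the monitor log is a sequence of (device, cylinder, head, record) tuples (the real simulator swept many cache sizes and organizations):

    from collections import OrderedDict

    def lru_hit_ratio(trace, capacity):
        """Replay an I/O trace against an LRU cache of `capacity` records,
        keyed by (device, cyl, head, rec); returns the hit ratio."""
        cache, hits, total = OrderedDict(), 0, 0
        for key in trace:
            total += 1
            if key in cache:
                hits += 1
                cache.move_to_end(key)          # most recently used
            else:
                cache[key] = None
                if len(cache) > capacity:
                    cache.popitem(last=False)   # evict least recently used
        return hits / total if total else 0.0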
past posts mentioning DMKCOL that collected detailed disk
activity data (also used for disk cache simulators)
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: David Boggs, Co-Inventor of Ethernet, Dies at 71 Date: 05 Mar 2022 Blog: Facebook
re:
The IBM Dallas E&S center published a comparison mid-80s, and the only thing I could imagine was that they used the prototype 3mbit ethernet, before the listen-before-transmit standard. The new IBM Almaden Research Center was heavily provisioned with CAT4 assuming 16mbit token ring ... but found 10mbit Ethernet (over CAT4) had higher per-card throughput ($69 cards compared to $800 16mbit T/R cards), higher aggregate LAN throughput, and lower LAN latency.
A 1988 ACM SIGCOMM study found that a 30-station CAT4 10mbit Ethernet had 8.5mbit effective throughput, dropping off to 8mbit when all 30 stations were running a low-level device driver loop, constantly transmitting minimum sized packets.
About the same time, my wife had written the response to a gov. agency RFI for a highly secure, large "campus" environment that included 3-tier networking, ethernet, and some other stuff ... and then we were out giving the details to other customers in executive presentations. This was at a time when the communication group was fiercely fighting off (2-tier) client/server and distributed computing. We found that they were also attacking us with enormous amounts of misinformation.
Trivia: my wife is a co-inventor on a 70s IBM patent for a token-passing LAN (predating the token-ring product).
The probability of collisions was further reduced with full-duplex (CAT4 or fiber) operation, introduced in the 1997 standard (aka simultaneously transmit and receive w/o collision).
Ethernet
https://en.wikipedia.org/wiki/Ethernet
The original 10BASE5 Ethernet uses coaxial cable as a shared medium,
while the newer Ethernet variants use twisted pair and fiber optic
links in conjunction with switches. Over the course of its history,
Ethernet data transfer rates have been increased from the original
2.94 Mbit/s[2] to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s
under development. The Ethernet standards include several wiring and
signaling variants of the OSI physical layer.
Token Ring
https://en.wikipedia.org/wiki/Token_Ring
In 1988 the faster 16 Mbit/s Token Ring was standardized by the 802.5
working group.[9] An increase to 100 Mbit/s was standardized and
marketed during the wane of Token Ring's existence and was never
widely used.[10] While a 1000 Mbit/s standard was approved in 2001, no
products were ever brought to market and standards activity came to a
standstill[11] as Fast Ethernet and Gigabit Ethernet dominated the
local area networking market
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
other posts mentioning 88 acm sigcomm articles
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017k.html#18 THE IBM PC THAT BROKE IBM
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017d.html#28 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015d.html#41 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#128 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2013m.html#30 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013m.html#18 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013i.html#83 Metcalfe's Law: How Ethernet Beat IBM and Changed the World
https://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2011d.html#41 Is email dead? What do you think?
https://www.garlic.com/~lynn/2009m.html#83 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009m.html#80 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2007g.html#80 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2005q.html#18 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2004e.html#17 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2002q.html#41 ibm time machine in new york times?
https://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2000f.html#39 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: David Boggs, Co-Inventor of Ethernet, Dies at 71 Date: 05 Mar 2022 Blog: Facebook
re:
Previously mentioned posts/comments that one of the AWD engineers took (mainframe) ESCON and tweaked it to make it full-duplex and faster ... for the RS/6000 "SLA" (200+mbit each direction, 400+mbit aggregate, before FCS was available), which didn't interoperate with anybody.
There were high-performance several-hundred-mbit backplane TCP/IP routers for $30k-$40k, with 16 ethernet LAN interfaces, T1 & T3 telco interfaces, and mainframe channel interfaces; we talk the vendor into adding SLA, enabling higher-performance RS/6000 SLA servers that interoperate with a large client environment. The engineer then wanted to do 800mbit SLA ... but we talk him into working on the fibre channel standard instead (which started at 1gbit/sec full-duplex, 2gbit/sec aggregate).
In our late 80s, 3-tier customer executive presentations, we did a comparison of 300 T/R stations at $800/card, or $240,000 (the communication group had severely performance-kneecapped the cards, with lower per-card throughput than the PC/RT 4mbit T/R card; Almaden also found aggregate 16mbit T/R LAN throughput less than a 10mbit ethernet's 8.5mbit aggregate) ... call it an average of 5mbit aggregate across 300 stations (16kbit/station), i.e. $240,000 for ~5mbit aggregate ($48k/mbit).
This was compared to 300 high-performance ethernet stations at $69, or $20,700, plus a $40K high-performance TCP/IP router ($60K aggregate) with an IBM mainframe channel interface, an IBM SLA RS/6000 interface, a T1 (or T3) telco interface, and 16 ethernet LANs; spreading 300 stations across 16 LANs is about 20/LAN, aka 8.5mbit/20 (425kbit/station).
In other words, $60K gets 300 high-performance 10mbit ethernet cards and a super high-performance TCP/IP router supporting 16 10mbit-ethernet LANs (each with 8.5mbit effective throughput, 136mbit aggregate), an IBM channel interface, an RS/6000 SLA interface, and a high-speed telco interface ... at 1/4th the cost of 300 (heavily performance-kneecapped) 16mbit T/R cards.
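The same comparison as back-of-envelope arithmetic (prices and throughput figures are the ones quoted above):

    tr_cost = 300 * 800            # $240,000 of 16mbit T/R cards
    tr_mbit = 5.0                  # kneecapped aggregate throughput
    enet_cost = 300 * 69 + 40_000  # $60,700: ethernet cards + TCP/IP router
    enet_mbit = 16 * 8.5           # 16 LANs at 8.5mbit effective = 136mbit

    print(tr_cost / tr_mbit)       # $48,000 per mbit (token ring)
    print(enet_cost / enet_mbit)   # ~$446 per mbit (ethernet + router)
    print(tr_mbit * 1000 / 300)    # ~16.7 kbit/station (token ring)
    print(8.5 * 1000 / 20)         # 425 kbit/station (~20 stations per LAN)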
3-tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
I've also mentioned that mainframe tcp/ip support was restricted ... in part because it was forced to use a "bridge" controller channel-attached box ... which meant the mainframe code had to do all the low-level LAN support. I've periodically mentioned that I did the changes to support a mainframe channel-attached high-speed router box (not having to execute low-level LAN bridge support helps account for the 500 times improvement in bytes moved per mainframe instruction executed, in tuning tests at Cray Research between an IBM 4341 and a Cray). posts mentioning rfc1044
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM "Green Card" Date: 06 Mar 2022 Blog: Facebook
There was a "green card" done in CMS IOS3270. I've since done a quick&dirty conversion to HTML
... from the original:
The GCARD exec provides an interactive display of information
extracted from the 'Green card' - GX20-1850-3 (Fourth Edition,
November 1976). If GCARD is called with no parameters then a
selection menu is displayed.
... snip ...
I had provided the device "sense bytes" section (originally taken from the 360/67 "blue card"). Even tho the original says "1976", it had been updated with 370/XA info (dated 17Feb86)
then there is online green card collection
http://planetmvs.com/greencard/
bitsavers 370 reference cards
http://www.bitsavers.org/pdf/ibm/370/referenceCard/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: High-Performance Mobile System-on-Chip Clusters Date: 06 Mar 2022 Blog: Facebook
High-Performance Mobile System-on-Chip Clusters
... not that we weren't doing our HA/CMP product in the late 80s and
early 90s
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
... it had started out as HA/6000, for the NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Oracle, Informix, Ingres, Sybase) that had VAXcluster support in the same source base with UNIX. A Jan1992 meeting with the Oracle CEO settled on a plan for 16-system HA/CMP by mid1992 and 128-system HA/CMP by YE1992.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebook
Computer BUNCH - Burroughs/UNIVAC/NCR/Control Data Corp/Honeywell
A former co-worker said his father was an economist in the gov/IBM legal actions and claimed all members of the BUNCH testified that everybody knew by the late 50s that the single most important market requirement was a compatible computing line (computer use was experiencing rapid growth, businesses were having to frequently upgrade to more powerful computers, and incompatibility was an inhibitor to those upgrades), and that for whatever reason, IBM executives were the only ones that managed to force the lab directors to toe the line for the market's most important (compatibility) requirement.
Amdahl played a major role in the 360s. He was then doing ACS/360 ... executives killed the project because they were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market (the following also lists some of the ACS/360 features that show up in ES/9000 more than 20yrs later)
https://people.cs.clemson.edu/~mark/acs_end.html
Amdahl leaves IBM shortly after ACS/360 shutdown.
During the late 60s, IBM was being plagued by plug-compatible controllers; as a countermeasure, in the early 70s IBM starts the "Future System" project to completely replace 370s, so complex that compatible makers couldn't keep up. During FS, internal politics were killing off 370 efforts, and the lack of new IBM 370 products is credited with giving compatible 370 system makers their market foothold (i.e. from the law of unintended consequences, FS as a countermeasure to compatible controllers gave rise to compatible 370 systems). When FS implodes, there is a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel ... some more detail
http://www.jfsowa.com/computer/memo125.htm
... also, Ferguson & Morris, "Computer Wars: The Post-IBM World", Time
Books, 1993,
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
FS reference:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
... note one of the last nails in the FS coffin was analysis by the Houston Science Center: if 370/195 applications were moved to an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown). Note: IBM Rochester does a vastly simplified FS for the S/38 ... but there was significant technology headroom for the low-end S/38 market.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
plug-compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
... after joining IBM, I was allowed to continue going to user group meetings (like SHARE) and also visiting customers. The director of one of the largest financial datacenters on the east coast liked me to stop by and talk technology. At one point, the IBM branch manager horribly offended the customer, and in retaliation they ordered an Amdahl system (a lonely Amdahl in a vast sea of IBM systems). Up until then, Amdahl had been selling into the technical/scientific/univ. market, but this would be the first for the "true blue" commercial market. I was asked to spend 6-12 months onsite at the customer (to help obfuscate why they ordered an Amdahl system). I talk it over with the customer, and then decline. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I didn't do it, I could forget career, promotions, raises. Since I was already periodically ridiculing FS ... and being told there wouldn't be any raises ... it didn't seem to make a lot of difference.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebookre:
... plug-compatible controller trivia: within a year of taking the 2-credit intro to fortran/computers, the univ hires me fulltime responsible for os/360. The univ had been sold a 360/67 for tss/360, to replace a 709/1401. TSS/360 never came to production fruition, so the univ ran it as a 360/65 w/os360. The univ shut down the datacenter on weekends and I would have the whole place to myself, although 48hrs w/o sleep could make monday morning classes a little hard. Student jobs on the 709 (tape->tape) ran in less than a second ... initially under OS/360, they ran over a minute. I installed HASP and cut the time in half. I then started punching the OS/360 stage-2 sysgen, interpreting the cards, reorganizing the whole stage2 sysgen, placing datasets and PDS members to optimize arm seek and PDS directory multi-track search ... cutting it another 2/3rds to 12.9secs. Student jobs never beat the 709 until I installed WATFOR. Along the way, I got stage2 sysgen so it would mostly run in the (current) production jobstream (w/HASP).
Some people from the science center came out and installed (virtual machine) CP67 (precursor to vm370) at the univ (3rd installation, after CSC itself and MIT Lincoln Labs) and I would get to play with it during my weekend dedicated time ... rewriting whole bunches of the code. The original CP67 had support for 1052 & 2741 terminals with automagic terminal type recognition (using the terminal controller SAD CCW to switch the terminal-type line scanner for each port). The univ. had some number of ASCII terminals, so I added ASCII support (extending automagic terminal type recognition to ASCII). Trivia: when the box arrived for the CE to install the tty/ascii port scanner in the 360 terminal controller, the box was labeled "Heathkit".
I then wanted to have a single dial-in number for all terminals ... hunt
group
https://en.wikipedia.org/wiki/Line_hunting
Didn't quite work: while I could switch the line scanner for each port (on the IBM telecommunication controller), IBM had taken a short cut and hard-wired the line speed for each port (TTY was a different line speed from 2741&1052). Thus was born the univ. project to do a clone controller: we built a mainframe channel interface board for an Interdata/3 programmed to emulate the mainframe telecommunication controller, with the addition that it could also do dynamic line speed determination. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin/Elmer) sell it commercially as an IBM clone controller. Four of us at the univ. get written up as responsible for (some part of the) clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebook
re:
Re: writing memos; In the late 70s and early 80s, I was also blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s; non-SNA; technology done by co-worker at science center; also used for the corporate sponsored univ BITNET). It really took off spring of 1981 after I distributed a trip report of a visit to Jim Gray at Tandem (he had left IBM Research, original SQL/Relational System/R and other DBMS work, in the fall of 1980, foisting off some amount of it on me). There were only about 300 people actively participating, but claims that up to 25,000 were reading. From IBM JARGON:
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
We then print six copies of some 300 pages, plus an executive summary and a summary of the summary, package them in Tandem 3-ring binders, and send them to the corporate executive committee (folklore is 5of6 wanted to fire me; possibly one of the inhibitors was that one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, including the world-wide sales&marketing support "HONE" systems, lots of datacenters not just bldg14&15). From summary of summary:
• The perception of many technical people in IBM is that the company is
rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible
• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors to
the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action
• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment.
... snip ...
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
... took another decade (1981-1992) ... IBM had gone into the red and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company .... reference gone behind paywall but mostly
lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs; after the breakup, all of those contracts would be between different companies, so all of the MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebook
re:
... other 360 history ... the biggest computer goof ever ... by the IBMer
responsible for ASCII (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM
Vice President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually
will be done. I've mentioned this fiasco elsewhere.
... snip ...
... aka the ascii unit record gear wouldn't be ready for the 360
announcement ... so they adapted the BCD gear (supposedly just
temporarily). More ASCII history
https://web.archive.org/web/20180402200104/http://www.bobbemer.com/ASCII.HTM
https://web.archive.org/web/20180402195951/http://www.bobbemer.com/BACSLASH.HTM
https://web.archive.org/web/20180402194530/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM
8-bit standard
https://web.archive.org/web/20180402195956/http://www.bobbemer.com/BYTE.HTM
some past posts referring to Bob Bemer:
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021d.html#92 EBCDIC Trivia
https://www.garlic.com/~lynn/2020.html#7 IBM timesharing terminal--offline preparation?
https://www.garlic.com/~lynn/2019b.html#39 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2018f.html#58 So much for THAT excuse | Computerworld SHARK TANK
https://www.garlic.com/~lynn/2018f.html#42 SCP of file to USS from Mac is corrupted
https://www.garlic.com/~lynn/2018e.html#63 EBCDIC Bad History
https://www.garlic.com/~lynn/2018d.html#15 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018b.html#77 Nostalgia
https://www.garlic.com/~lynn/2018b.html#75 Nostalgia
https://www.garlic.com/~lynn/2017g.html#109 Online Terminals
https://www.garlic.com/~lynn/2017g.html#5 RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
https://www.garlic.com/~lynn/2016h.html#79 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#71 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#70 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#64 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016e.html#0 Is it a lost cause?
https://www.garlic.com/~lynn/2016b.html#47 ASCII vs. EBCDIC (was Re: On sort options ...)
https://www.garlic.com/~lynn/2015e.html#6 New Line vs. Line Feed
https://www.garlic.com/~lynn/2015.html#65 16-bit minis, was Floating point
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebook
re:
other 370 history: a decade ago, I was asked if I could track down the decision to make all 370s virtual memory ... decade-old archived post with pieces from the reply
https://www.garlic.com/~lynn/2011d.html#73
basically, MVT storage management was so bad that regions had to be specified four times larger than actually used, so a typical 1mbyte 370/165 didn't have enough concurrent regions to keep the processor busy and justified. Remapping MVT into a 16mbyte virtual address space would allow increasing the number of regions by a factor of four with little or no paging ... better justifying the 370/165.
It started out with little more than running MVT in a 16mbyte virtual machine ... with a little bit of code moved into the MVT kernel (OS/VS2-SVS): creating the 16mbyte virtual address space table, handling the few page faults, and doing the page I/O were all relatively trivial. The biggest part was the channel programs: channel programs passed to EXCP/SVC0 now had virtual addresses, and channels required real addresses. Initially, the virtual machine CP67 "CCWTRANS" (which made copies of virtual machine channel programs, replacing virtual addresses with real addresses) was crafted into EXCP/SVC0.
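A minimal sketch of the CCWTRANS idea, assuming a hypothetical v2r() lookup that pins the page and returns the real address (the actual code also handled CCWs whose data areas crossed page boundaries, splitting them with data chaining):

    PAGE = 4096

    def translate_channel_program(virtual_ccws, v2r):
        """Copy a channel program, replacing virtual data addresses with real
        ones. Each CCW is modeled here as (opcode, data_addr, flags, count)."""
        real_ccws = []
        for op, vaddr, flags, count in virtual_ccws:
            # simplification: assume no data area crosses a page boundary
            assert vaddr % PAGE + count <= PAGE, "would need splitting w/data chaining"
            real_ccws.append((op, v2r(vaddr), flags, count))
        return real_ccws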
370 virtual memory trivia: the full 370 architecture was kept in CMS "SCRIPT" files; a command line argument produced either the full 370 architecture document (distributed in "red" 3-ring binders and referred to as the "red book") or the "370 Principles of Operation" subset (the difference being lots of engineering notes, consideration of alternatives, justifications, etc). The 370/165-II was falling behind its implementation schedule, and its engineers asked for some 370 virtual memory features to be dropped in order to gain back 6 months (and make the scheduled announcement). Eventually it was agreed, but the other models that had already implemented the full architecture (and any software that used it) would have to be redone to match the 165-II subset.
other recent posts referencing decision to make all 370s virtual
memory
https://www.garlic.com/~lynn/2022b.html#76 Link FEC and Encryption
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#82 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021g.html#43 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021g.html#25 Execute and IBM history, not Sequencer vs microcode
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021e.html#32 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Color 3279 Date: 06 Mar 2022 Blog: Facebook
There was an '81 x-mas greeting (REX exec by the REXX author); if executed on a 3279, it used FSX to display a xmas tree with blinking colored lights ... I've tried to do an HTML approx. of the equivalent (archived old post)
NOTE: as mentioned in the above, this was NOT the 1987 worm released
on BITNET, from VMSHARE archive
http://vm.marist.edu/~vmshare/browse.cgi?fn=CHRISTMA&ft=PROB
aka TYMSHARE offered their CMS-based online computer conferencing
system free to SHARE starting AUG1976 ...
http://vm.marist.edu/~vmshare/
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
posts referencing the post with html of the 3279 xmas tree:
https://www.garlic.com/~lynn/2021k.html#98 BITNET XMAS EXEC
https://www.garlic.com/~lynn/2021d.html#48 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#3 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#69 Fumble Finger Distribution list
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019b.html#54 Misinformation: anti-vaccine bullshit
https://www.garlic.com/~lynn/2018.html#21 IBM Profs
https://www.garlic.com/~lynn/2015b.html#43 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2013i.html#23 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2012d.html#49 Do you know where all your sensitive data is located?
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011o.html#15 John R. Opel, RIP
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2011i.html#6 Robert Morris, man who helped develop Unix, dies at 78
https://www.garlic.com/~lynn/2011i.html#4 Robert Morris, man who helped develop Unix, dies at 78
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011c.html#81 A History of VM Performance
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009r.html#58 xmas card
https://www.garlic.com/~lynn/2008d.html#58 Linux zSeries questions
https://www.garlic.com/~lynn/2007v.html#63 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#56 An old fashioned Christmas
https://www.garlic.com/~lynn/2007v.html#55 An old fashioned Christmas
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer BUNCH Date: 06 Mar 2022 Blog: Facebook
re:
The quick&dirty 3081 effort (mentioned in comments elsewhere in this
thread) ... more here
http://www.jfsowa.com/computer/memo125.htm
One of the issues was that the 308x was initially supposed to be multiprocessor-only ... however, there were a number of customers running software that didn't have multiprocessor support ... for instance, the ACP/TPF operating system didn't have multiprocessor support ... and IBM was concerned those whole markets would move to Amdahl's latest single processor. The initial two-processor 3081D had less processing power than the latest Amdahl single processor. IBM then doubled the 3081D cache sizes for the 3081K ... claiming that it was (nearly) the same processing power as the latest Amdahl single processor machine. The later IBM four-processor 3084 also had less processing power than Amdahl's two-processor machines.
Along the way, there were special tweaks to VM370 specifically to improve the performance of ACP/TPF running in a single processor virtual machine on a 3081 ... however, they also created something like a 10-15% degradation for all the other VM370 multiprocessor customers. They then did some tweaks to 3270 terminal response in an attempt to mask the 10-15% throughput degradation. However, one of the long-time virtual machine gov. agency customers (back to CP67 in the 60s) was all ASCII "glass teletypes" (not 3270) ... so the 3270 terminal tweaks had no effect, and they also had to live with the 10-15% multiprocessor throughput degradation.
I then get email asking if I could do anything to help the customer
(other than undoing the throughput degradation tweaks for ACP/TPF
single processor virtual machines). various old email references
https://www.garlic.com/~lynn/2007.html#email801006b
https://www.garlic.com/~lynn/2007.html#email801008b
https://www.garlic.com/~lynn/2007.html#email820512
https://www.garlic.com/~lynn/2007.html#email820512b
https://www.garlic.com/~lynn/2007.html#email820513
https://www.garlic.com/~lynn/2001f.html#email830420
https://www.garlic.com/~lynn/2006y.html#email841114
https://www.garlic.com/~lynn/2006y.html#email841114b
https://www.garlic.com/~lynn/2006y.html#email841120
https://www.garlic.com/~lynn/2006y.html#email851015
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2007b.html#email860113
https://www.garlic.com/~lynn/2007b.html#email860114
https://www.garlic.com/~lynn/2006y.html#email860119
https://www.garlic.com/~lynn/2006y.html#email860121
https://www.garlic.com/~lynn/2007b.html#email860124
https://www.garlic.com/~lynn/2007.html#email860219
https://www.garlic.com/~lynn/2011c.html#email860501
https://www.garlic.com/~lynn/2011c.html#email860609
https://www.garlic.com/~lynn/2011e.html#email870320
other random email about 370 >16mbyte real storage support, where a virtual page above the 16M line that needed to be below 16M would be written to disk and read back in. I had another solution that would just involve a few instructions (and a little sleight of hand)
https://www.garlic.com/~lynn/2006t.html#email800121
Note: in the transition from CP67 to VM370, a lot of stuff was dropped (like multiprocessor support, which didn't reappear for customers until VM370 Release 4 in the late 70s) and/or greatly simplified. CP67 dropped a virtual machine from the dispatching/run queue if it was waiting with no outstanding "high-speed" I/O (at most slow-speed terminal I/O). In CP67, this was based on the real device type; in VM370, it was changed to the virtual device type. Things were fine as long as the virtual and real types were the same or similar. The introduction of VM370 3270 support changed that (various 3270 I/O elapsed times were less than disk I/O). The 3270 masking tweak for ACP/TPF didn't fix that problem, but further covered it up. One of my suggestions was to restore the CP67 implementation.
Trivia: the CP67 implementation incremented the virtual machine's high-speed I/O count every time a virtual I/O was started on a real high-speed device and decremented it whenever the I/O finished (so the dispatch check was just count > 0); the VM370 implementation scanned all virtual devices, checking for any high-speed virtual device with an active I/O (greatly increasing overhead).
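A minimal sketch of the two bookkeeping approaches (illustrative, not the actual CP67/VM370 code):

    from dataclasses import dataclass

    @dataclass
    class VDev:
        high_speed: bool
        io_active: bool = False

    class CP67Style:
        """O(1): keep a running count of in-flight high-speed I/Os."""
        def __init__(self):
            self.hs_count = 0
        def io_start(self):    # virtual I/O started on a real high-speed device
            self.hs_count += 1
        def io_done(self):     # that I/O finished
            self.hs_count -= 1
        def stays_in_queue(self):
            return self.hs_count > 0

    class VM370Style:
        """O(n): rescan every virtual device on each check."""
        def __init__(self, vdevs):
            self.vdevs = vdevs
        def stays_in_queue(self):
            return any(d.high_speed and d.io_active for d in self.vdevs)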
SMP multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
global LRU page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Salary Date: 07 Mar 2022 Blog: Facebook
... after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters; the online sales&marketing support HONE systems were long time customers. I was also allowed to continue going to user group meetings (like SHARE) and also visiting customers.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
The director of one of the largest financial datacenters on the east coast liked me to stop by and talk technology. At one point, the IBM branch manager horribly offended the customer, and in retaliation they ordered an Amdahl system (a lonely Amdahl in a vast sea of IBM systems). Up until then, Amdahl had been selling into the technical/scientific/univ. market, but this would be the first for the "true blue" commercial market. I was asked to spend 6-12 months onsite at the customer (to help obfuscate why they ordered an Amdahl system). I talk it over with the customer, and then decline. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I didn't do it, I could forget career, promotions, raises. Since I was already periodically ridiculing FS ... and being told there wouldn't be any raises ... it didn't seem to make a lot of difference.
Then in the early 80s, I was at San Jose Research and submitted an "Open Door" that I was vastly underpaid, with documentation. I got back a written reply from the head of HR ... that said that after detailed examination of my complete career, I was being paid exactly what I was supposed to be. I take the original and the reply, and add a cover ... pointing out that I was being asked to interview coming graduates to work under my technical direction in a new group ... who were being offered a starting salary 30% more than I was currently making. I never got a reply, but within a few weeks, I got a 30% raise (putting me on a level playing field with what was being offered to the candidates I was interviewing, who would be graduating at the end of spring semester). One of many times co-workers reminded me that, in IBM, "Business Ethics is an Oxymoron".
FS trivia: During the late 60s, IBM was being plagued by plug-compatible controllers; as a countermeasure, in the early 70s IBM starts the "Future System" project to completely replace 370s, so complex that compatible makers couldn't keep up. During FS, internal politics were killing off 370 efforts, and the lack of new IBM 370 products is credited with giving clone/compatible 370 system makers their market foothold (i.e. from the law of unintended consequences, FS as a countermeasure to compatible controllers gave rise to compatible 370 systems). When FS implodes, there is a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel ... some more detail
http://www.jfsowa.com/computer/memo125.htm
... also, Ferguson & Morris, "Computer Wars: The Post-IBM World", Time
Books, 1993,
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
FS reference:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
... note one of the last nails in the FS coffin was analysis by the Houston Science Center: if 370/195 applications were moved to an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown). Note: IBM Rochester does a vastly simplified FS for the S/38 ... but there was significant technology headroom for the low-end S/38 market.
... aka it wasn't exactly career enhancing to be ridiculing Future System. Down from tech sq (in central sq), there was a theater that had been playing a cult film for several years ... which I would compare with what was going on in Future System (in the film, inmates get loose from the local mental institution and are mistaken for the inhabitants).
future system posts:
https://www.garlic.com/~lynn/submain.html#futuresys
past refs to getting 30% raise
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
Past posts mentioning that in IBM, Business Ethics is an Oxymoron
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#39 IBM Registered Confidential
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021c.html#42 IBM Suggestion Program
https://www.garlic.com/~lynn/2021c.html#41 Teaching IBM Class
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2021.html#83 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2018f.html#96 IBM Career
https://www.garlic.com/~lynn/2017e.html#9 Terminology - Datasets
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
https://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2007j.html#72 IBM Unionization
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US Date: 08 Mar 2022 Blog: Facebook
Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US. Fossil-fuel firms want to turn violence and bloodshed into an oil and gas propaganda-generating scheme. The goal: a drilling bonanza
Biden is approving more oil and gas drilling permits on public lands
than Trump, analysis finds
https://www.washingtonpost.com/politics/2021/12/06/biden-is-approving-more-oil-gas-drilling-permits-public-lands-than-trump-analysis-finds/
U.S. taxpayers are being shortchanged hundreds of millions of dollars
each year by outdated oil and gas policies on federal public lands.
https://www.americanprogress.org/article/federal-oil-and-gas-royalty-and-revenue-reform/
With A Wink And A Nod: How the Oil Industry and the Department of
Interior Are Cheating the American Public and California School
Children
https://www.pogo.org/report/1996/03/with-wink-and-nod-how-oil-industry-and-department-of-interior-are-cheating-american-public-and-california-school-children/
Oil price soars to highest level since 2008 amid Ukraine
conflict. Energy markets have been rocked in recent days over supply
fears triggered by the Russian invasion of Ukraine.
https://www.bbc.co.uk/news/business-60642786
How much oil is consumed in the United States?
https://www.eia.gov/tools/faqs/faq.php?id=33&t=6
In 2020, the United States consumed an average of about 18.19 million
barrels of petroleum per day, or a total of about 6.66 billion barrels
of petroleum. This was the lowest level of annual consumption since
1995. The drop in consumption in 2020 from 2019 was the largest
recorded annual decline in U.S. petroleum demand. The decrease was
largely the result of the global response to the coronavirus
(COVID-19) pandemic.
... snip ...
The US imports crude oil and petroleum from Russia, but it's not a
major source
https://www.wusa9.com/article/news/verify/verify-united-states-imports-crude-oil-petroleum-from-russia-not-major-source/65-ce3e3ff2-7078-4c40-b57f-3db1ca8e62a4
In 2020, the U.S. imported about 27.7 million barrels of crude oil
from Russia, which represented 1.3% of total crude oil imports.
... snip ...
... per-day Russian imports: 27.7M/365, or about 75,890 barrels/day
... Russian imports as a percent of US per-day use: 75,890/18,190,000, or about 0.4% of total US use.
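A quick sanity check of that arithmetic (a back-of-the-envelope sketch in Python; the variable names are mine, the figures are the ones quoted above):

# figures quoted above (2020)
russia_2020_barrels = 27_700_000   # barrels of Russian crude imported over the year
us_daily_use = 18_190_000          # barrels of US petroleum consumed per day

russia_per_day = russia_2020_barrels / 365
print(f"Russian imports/day: {russia_per_day:,.0f} barrels")          # ~75,890
print(f"share of US daily use: {russia_per_day / us_daily_use:.2%}")  # ~0.42%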
There was an article that US speculators were behind the enormous oil (& gas) price spike of summer 2008. Then a member of congress released the speculation transactions that identified the corporations responsible for the enormous oil (& gas) price spike. For some reason, the press then pilloried & vilified the member of congress for violating corporation privacy (& exposing the corporations preying on the US public), rather than trying to hold the speculators accountable.
(summer 2008) Oil settles at record high above $140
https://money.cnn.com/2008/06/27/markets/oil/
"GRIFTOPIA" had chapter on CFTC commodities market had requirement
that a significant position was required it order to play because
speculators were responsible for wild irrational price changes, making
money off volatility aka a) buy low, sell high, betting price goes up
but purely pump&dump game behind the press driving up the price, then
turn around and short, behind the press driving price down. Then a
select group of speculators were invited to play.
https://en.wikipedia.org/wiki/Griftopia
Also happens in the equity/stock market, like a casino where all the
games are rigged ... where the principals bet on the direction of the
change and manipulate things to force the desired change. They do
pump&dump ... pushing the market up (buy low, sell high) and then
short ... pushing the market down. An old interview has them all doing
illegal activity (before it got much worse with HFT) ... and having
nothing to worry about from the SEC.
http://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/
With HFT they take an order to buy a certain stock/equity, find it at
a lower price, buy it, and then sell it to their customer at a higher
price (sometimes just microseconds later).
https://en.wikipedia.org/wiki/High-frequency_trading
'Most Americans Today Believe the Stock Market Is Rigged, and They're
Right'. New research shows insider trading is everywhere. So far, no
one seems to care.
https://www.bloomberg.com/news/features/2021-09-29/is-stock-market-rigged-insider-trading-by-executives-is-pervasive-critics-say
Stock buybacks used to be illegal because it was so easy for
executives to manipulate the market. "The Great Deformation: The
Corruption of Capitalism in America" (lots on stock buybacks)
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
The economic mess started out with the players buying up mortgages, securitizing them, paying the rating agencies for triple-A (when the rating agencies knew they weren't worth triple-A), and selling them into the bond market at a higher price (being able to do over $27T 2001-2008). Then they started creating securitized mortgages designed to fail (pay for triple-A, sell into the bond market) and taking out (CDS) gambling bets that they would fail.
As the economy was failing and the bubble bursting, SECTREAS convinces
congress to appropriate $700B in TARP funds
https://en.wikipedia.org/wiki/Troubled_Asset_Relief_Program
supposedly for buying TBTF "off-book" troubled assets. However, the largest holder of the (CDS) gambling bets was AIG, which was negotiating to pay off at 50 cents on the dollar, when the SECTREAS steps in, has them sign a document that they can't sue those making the gambling bets, and has them take TARP funds in order to pay off at face value. The largest recipient of TARP funds was AIG, and the largest recipient of the face-value pay-offs was the firm formerly headed by SECTREAS (a firm that was also one of the major players in the CFTC oil/gas price spike).
trivia: the TBTF bail-out wasn't TARP (i.e. ye2008, just the four
largest TBTF had over $5.2T in off-book toxic assets; if forced to
bring them back on the books, they would have been declared insolvent
and forced to be liquidated, so TARP's $700B could never have saved
them). It was the Federal Reserve that was buying trillions in
off-book toxic assets at 98 cents on the dollar and providing tens of
trillions in ZIRP funds.
https://en.wikipedia.org/wiki/Zero_interest-rate_policy
The Federal Reserve fought a legal battle to prevent disclosing what it was doing. When it lost, the chairman held a press conference and said that they had expected the TBTF to use the funds to help main street, but when they didn't (just pocketing the money), he had no way to force them. However, 1) that didn't stop the flow of ZIRP funds, and 2) the chairman had been partially selected for being a depression-era scholar; something similar had been tried then, with the same results.
Griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
financial reporting fraud posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud.fraud
Great Deformation & Stock Buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Too Big To Fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
triple-A rated, toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
fed chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
ZIRP funds
https://www.garlic.com/~lynn/submisc.html#zirp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 9020 Date: 08 Mar 2022 Blog: Facebook
IBM 9020
FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794
from reviews/comments
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/product-reviews/1456525514?reviewerType=all_reviews
[IBM Brawl 12] Fox, Joe, "The Brawl in IBM 1964", CreateSpace
Independent Publishing Platform, January 26, 2012, ISBN-13: 978-
1456625514, 314 pages
"The Brawl in IBM 1964" tells only half the story of IBM's involvement
with the FAA. This first half is well chronicled by Joe Fox. He was
there, and his words take the reader on a first hand trip through this
saga.
This first half focused on an internal risk-averse IBM bureaucracy
and its struggles to bid and field the FAA's En Route Control System.
The second half of the story, not told in this book, involves the
failure of the same organization, many of the same people, and much of
the same risk-averse culture to complete the FAA's Advanced
Automation System (AAS) modernization program.
... snip ...
The Ugly History of Tool Development at the FAA
https://www.baselinemag.com/project-management/The-Ugly-History-of-Tool-Development-at-the-FAA/
we were doing HA/6000 (which I fairly early renamed to HA/CMP (High Availability Cluster Multi-Processing) when we started working on cluster scale-up with national labs and RDBMS vendors) and got brought into some of the IBM FAA AAS reviews. There were some major design mistakes/assumptions that were difficult to recover from. We got to know somebody (at the "Rusty Bucket" in Bethesda) who worked 1st shift in his daytime IBM job ... and then wrote Ada, 2nd shift, for AAS.
After leaving IBM, I met Joe Fox and worked on a project with a company that he and some other former IBMers had founded.
past posts mentioning the IBM 1964 Brawl and/or Joe Fox
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CDC6000 Date: 09 Mar 2022 Blog: Facebook
Thornton and Cray had done the CDC 6600 ... Cray left to form Cray Research and Thornton left to form Network Systems (makers of HYPERchannel).
CDC 6600
http://academickids.com/encyclopedia/index.php/CDC_6600
Working with Jim Thornton, who was the system architect and the
'hidden genius' behind the 6600, the machine soon took form.
... snip ...
"Considerations In Computer Design - Leading Up To The Control Data
6600" (1963)
https://cseweb.ucsd.edu/classes/sp11/cse240C-a/Papers/thornton_6600_paper.pdf
"DESIGN Of A COMPUTER: The Control Data 6600" (1970)
https://archive.computerhistory.org/resources/text/CDC/cdc.6600.thornton.design_of_a_computer_the_control_data_6600.1970.102630394.pdf
When I was doing HSDT in IBM ... had some amount to do with NSC (and
former CDC people). HSDT started in early 80s with T1 and faster
computer links ... and was working with the director of NSF and was
supposed to get $20M to interconnect the NSF supercomputer
centers. HYPERchannel was fairly standard in the supercomputer
centers, so HSDT had some number of their boxes for connectivity. As
I've periodically commented, then congress cuts the budget, some other
things happen, and finally an RFP is released. Old archived post with
copy of Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP. The NSF
director tries to help by writing the company a letter (3Apr1986, NSF
Director to IBM Chief Scientist and IBM Senior VP and director of
Research, copying the IBM CEO) with support from other gov. agencies,
but that just makes the internal politics worse (as did claims that
what we already had running was at least 5yrs ahead of the winning
bid, RFP awarded 24Nov87). As regional networks connect in, it becomes
the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CDC6000 Date: 09 Mar 2022 Blog: Facebook
re:
Amdahl was doing ACS/360; it was killed when executives were afraid
that it would advance the state of the art too fast and IBM would lose
control of the market ... account of the end of ACS/360
https://people.cs.clemson.edu/~mark/acs_end.html
it includes features that show up more than 20yrs later in ES/9000
(note: shortly after ACS/360 ends, Amdahl leaves IBM).
FS trivia: During the late 60s, IBM was being plagued with
plug-compatible controllers, as countermeasure in the early 70s, IBM
starts the "Future System" project to completely replace 370s, so
complex that compatible makers couldn't keep up. During FS, internal
politics were killing off 370 efforts and the lack of new IBM 370
products is credited with giving clone/compatible 370 system makers
their market foothold (i.e. from the law of unintended consequences,
FS as countermeasure to compatible controllers gives rise to
compatible 370 systems). When FS implodes, there is a mad rush to get
stuff back into the 370 product pipeline, including kicking off the
quick&dirty 3033&3081 efforts in parallel ... some more detail
http://www.jfsowa.com/computer/memo125.htm
... also, Ferguson & Morris, "Computer Wars: The Post-IBM World", Time
Books, 1993,
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
FS reference:
and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its
wrongheadedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive.
... snip ...
... note one of the last nails in the FS coffin was an analysis by the Houston Science Center: if 370/195 applications were moved to an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a 30-times slowdown). Note: IBM Rochester does a vastly simplified FS for the S/38 ... but there was significant technology headroom for the low-end S/38 market.
... and ridiculing FS wasn't exactly career enhancing (I was told that if I wanted promotions and/or raises, I had to transfer to FS instead of continuing to work on 360&370). Down the street from tech sq (in central sq), there was a theater that had been playing a cult film for several years ... I would draw a comparison between the film's premise and what was going on in Future System (aka inmates had gotten loose and were mistaken for the inhabitants).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CDC6000 Date: 09 Mar 2022 Blog: Facebook
re:
within a year of taking the 2credit intro to fortran/computers class, the univ hires me fulltime, responsible for os/360. Some people from the science center came out and installed (virtual machine) CP67 (precursor to vm370) at the univ (3rd installation after CSC itself and MIT Lincoln Labs) and I would get to play with it during my weekend dedicated time ... rewriting whole bunches of the code. The original CP67 had support for 1052 & 2741 terminals with automagic terminal type recognition (and used the terminal controller SAD CCW to change the terminal-type scanner for each port). The univ. had some number of ASCII terminals, so I added ASCII support (extending automagic terminal type recognition to ASCII). Trivia: when the box arrived for the CE to install the tty/ascii port scanner in the 360 terminal controller, the box was labeled "Heathkit".
I then wanted to have a single dial-in number for all terminals ... hunt
group
https://en.wikipedia.org/wiki/Line_hunting
It didn't quite work: while I could switch the line scanner for each
port (on the IBM telecommunication controller), IBM had taken a short
cut and hard-wired the line speed for each port (TTY was a different
line speed from 2741&1052). Thus was born the univ. project to do a
clone controller: we built a mainframe channel interface board for an
Interdata/3 programmed to emulate the mainframe telecommunication
controller, with the addition that it could also do dynamic line
speed determination (sketched below). Later it was enhanced with an
Interdata/4 for the channel interface and a cluster of Interdata/3s
for the port interfaces. Interdata (and later Perkin/Elmer) sold it
commercially as an IBM clone controller. Four of us at the univ. got
written up as responsible for (some part of the) clone controller
business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
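The dynamic line speed determination mentioned above is the classic auto-baud trick: time the width of the start bit of a known typed character and map it to the closest standard rate. A minimal hypothetical sketch in Python (the read_level() sampling callback and the rate table are my assumptions for illustration, not the original Interdata implementation):

import time

STANDARD_BAUDS = [110, 134.5, 150, 300, 600, 1200]  # era-appropriate rates

def detect_baud(read_level):
    """read_level() samples the RX line: 1 = mark/idle, 0 = space."""
    while read_level() == 0:      # wait for the line to go idle (mark)
        pass
    while read_level() == 1:      # falling edge = start bit begins
        pass
    t0 = time.monotonic()
    while read_level() == 0:      # CR (0x0D) has a 1 as its low-order bit,
        pass                      # so the space run is exactly one bit time
    bit_time = time.monotonic() - t0
    measured = 1.0 / bit_time     # bits per second
    return min(STANDARD_BAUDS, key=lambda b: abs(b - measured))

Having the user hit RETURN at connect time gives the controller a known character (CR) to measure; the IBM controller couldn't adapt this way because its line speed was hard-wired per port.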
Plug-compatible controllers are claimed to have been a major motivation for the IBM Future System project in the early 70s (and the internal politics killing 370 efforts is credited with giving clone processor makers their market foothold).
plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US Date: 09 Mar 2022 Blog: Facebook
re:
Stocks soar, with Dow spiking 650 points, as oil prices plummet. The
S&P 500 and the Nasdaq skyrocket 2.6 percent and 3.6 percent,
respectively, as Brent crude slides nearly 12 percent.
https://www.washingtonpost.com/business/2022/03/09/stock-market-today-oil-prices/
... insiders bet on volatility and then pump the news to facilitate the desired volatility; they can even drive up commodities at the same time as driving down equities/stocks ... and then reverse the news to reverse commodities and equities/stocks.
Griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370/158 Integrated Channel Date: 10 Mar 2022 Blog: Facebook
370/158 had an "integrated channel" ... the 158 engine executed both the 370 instruction microcode and the integrated channel microcode. The integrated channel microcode was used later for the 303x channel director (i.e. a 3031 was two 158 engines, one with just the 370 instruction microcode and a 2nd 158 engine that executed just the integrated channel microcode; a 3032 was a 168-3 modified to use the 303x channel director for external channels; a 3033 started out as 168-3 logic remapped to 20% faster chips).
A 370/158 offered integrated storage
http://s3data.computerhistory.org/brochures/ibm.370mod158.1972.102646258.pdf
You can also get the optional Integrated Storage Controls feature that
resides in the processor and provides for integrated attachment of
direct access storage devices. Each of the two controls in the feature
can accommodate up to sixteen 3330 Disk Storage drives, for a total
integrated attachment of 32 drives per Model 158. This provides both a
practical and economical way to attach DASDS.
... snip ...
the imploding Future System kicked off the quick&dirty projects for
303x and 3081 in parallel
https://www.garlic.com/~lynn/submain.html#futuresys
Jan1979 I got asked to do a (CDC6600) benchmark on an engineering 4341 for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). 6600: 35.77secs, 158: 43.90secs, 3031: 36.61secs, 4341: 36.13secs. Note: the 3031 ran the benchmark faster than the 158 (even tho the same engine) because the integrated channel microcode was offloaded to a 2nd 158 engine. The engineering 4341 had a slowed-down processor cycle time; production models would be faster (the 4341 also had integrated channel microcode, and with a few tweaks the engineers used them for 3380 3mbyte/sec data-streaming testing). Cluster 4341s (cheaper, faster, smaller footprints, fewer environmentals, also suited for non-datacenter deployments) so threatened the POK 3033 that the head of POK got corporate to cut the allocation of a critical 4341 manufacturing component in half.
recent posts mentioning 303x channel director was 158 engine w/o 370
instruction microcode, just the integrated channel microcode
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021i.html#85 IBM 3033 channel I/O testing
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021d.html#57 IBM 370
https://www.garlic.com/~lynn/2021d.html#56 IBM 370
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019e.html#146 Water-cooled 360s?
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019.html#76 How many years ago?
https://www.garlic.com/~lynn/2019.html#63 instruction clock speed
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#0 IBM's 3033
https://www.garlic.com/~lynn/2018e.html#91 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018c.html#18 Old word processors
cluster 4341 topic drift: San Jose Research did a vm/4341 cluster project using trotter/3088 (i.e. a souped-up CTCA that allowed interconnecting up to eight processors). The internal implementation did cluster operations in under a second. Then the communication group said that to release it to customers, cluster operations had to use SNA/VTAM ... and cluster operations went from under a second to over half a minute.
my wife had been in the g'burg JES group ... but was con'ed into going to POK to be in charge of loosely-coupled architecture, where she did Peer-Coupled Shared Data architecture. She didn't remain long because of 1) constant battles with the communication group trying to force her to use SNA/VTAM for cluster operation and 2) little uptake except for IMS Hot-standby (until much later with sysplex) ... she has a story of going out with Vern Watts after work and asking him who he was going to get permission from to do "hot-standby"; he said nobody, he would tell them when it was all done.
Peer-Coupled Shared Data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
Note that later he found that IMS maintained updated status with the standby machine, so fall-over was a minute or two ... however, SNA/VTAM required that all sessions be re-established ... for large complexes with tens of thousands of terminals it could take over 90mins (VTAM session establishment was a heavyweight operation that didn't scale well, with elapsed time increasing as the number of sessions increased).
I had gotten involved when asked to turn out (as an IBM product) a
baby bell implementation of NCP/VTAM emulation on Series/1 that had
enormously better throughput and supported features ... including
"shadow sessions" for hot-standby operation. The communication group
was infamous for political dirty tricks, and several people thought
they had managed to handle all possibilities ... what the
communication group then did can only be described as truth is
stranger than fiction. Part of a presentation I made fall 1986 at the
SNA ARB in Raleigh
https://www.garlic.com/~lynn/99.html#67
part of a presentation made at a Common User Group meeting by one of
the baby bell people
https://www.garlic.com/~lynn/99.html#70
posts mentioning national lab cdc6600 benchmarks
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2018b.html#49 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2016h.html#51 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#49 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#71 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014c.html#61 I Must Have Been Dreaming (36-bit word needed for ballistics?)
https://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#38 DEC/PDP minicomputers for business in 1968?
https://www.garlic.com/~lynn/2011d.html#40 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011c.html#65 Comparing YOUR Computer with Supercomputers of the Past
https://www.garlic.com/~lynn/2009r.html#37 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
https://www.garlic.com/~lynn/2006y.html#21 moving on
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2003g.html#68 IBM zSeries in HPC
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: AADS Chip Strawman Date: 11 Mar 2022 Blog: Facebook
AADS Chip Strawman
AADS had some pilots, but no real deployments ... I believe it was because it turned out to be a disruptive technology that would have upset the status quo for too many stakeholders. I made some presentations where I would say that I took a $500 mil-spec chip, aggressively cost-reducing it to less than a dollar while improving the security.
The lead technical director for the Information Assurance Directorate (for
the agency at Ft. Meade) was running an assurance session in the
trusted computing track at 2001 Intel Developers Forum and asks me to
do a talk on the chip ... gone 404 but lives on at the wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
copy of the overheads are at our website
https://www.garlic.com/~lynn/iasrtalk.zip
NACHA did a pilot ... but using one of their own chips programmed to
emulate our specifications (w/o anonymous payments); also gone 404,
click on the 23July2001 news item, which will bring up the report
https://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html
It eliminates the use of "static data" from breaches to perform fraudulent transactions; as a result, eavesdropping and breaches were no longer a threat for merchants and point-of-sale, eliminating much of the motivation for crooks to perform breaches. It also eliminated the necessity to encrypt financial transactions over the internet. Both the Information Assurance Directorate and the collection organization really liked that characteristic. However at the time, there was also an EU requirement that point-of-sale (electronic) transactions should be as anonymous as cash transactions ... which the collection organization didn't like. Transactions would just require authentication but didn't require identification (which they periodically tried to conflate).
We had been brought into the X9A10 standards group, which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. At the time, US consumer banks got 40-60% of their bottom-line profit from payment fees, and the fraud surcharge accounted for up to 90%. The elimination of the majority of fraud in retail transactions had the possibility of significantly affecting the bottom line. Also, crooks would have shifted their activities to "identity theft" in the form of opening new financial accounts ... where the responsibility was solely on the financial institutions (who could no longer bill merchants for it). It would also have required institutions to significantly strengthen their gov.-mandated "know your customer" procedures. Warrants could still be used for transaction identity information (based on "know your customer" mandates) ... but that shifts liability to financial institutions (no merchants and merchant fees involved).
"YES Card" information and posts
https://www.garlic.com/~lynn/subpubkey.html#x959
identity theft posts
https://www.garlic.com/~lynn/submisc.html#identity.theft
About the same time, I got a request from a legal representative in a dispute with an EU financial institution over an ATM cash machine withdrawal that the customer didn't make. Somehow the gov. had been convinced (based on some chipcard technology) to shift the burden of proof: from the institution proving the customer made the transaction, to the customer proving that they didn't make the transaction ... and the financial institution was claiming the CCTV recording of the transaction was no longer available (making the customer liable for the transaction fraud rather than the institution).
"YES Card" information and posts
https://www.garlic.com/~lynn/subintegrity.html#yescard
The patents started in the late 90s and were an outgrowth of work on the x9.59 transaction standard in the x9a10 committee, which in the mid-90s had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments ... all the patents were assigned to First Data Corporation (now part of FISERV). I've been retired for over 15yrs and have little knowledge of the current state of the patents.
AADS patents
https://www.garlic.com/~lynn/aadssummary.htm
At one time FDC, Infineon, and Intel were looking at them for use in providing authentication as to the protection level and integrity of the consumer devices, in addition to authentication of the person (possessing the device) doing transactions. Infineon had done pilot chips in their new secure chip fab in Dresden ... and the proposal was that Infineon would register each chip's public key at manufacturing time along with the associated security/integrity level of the chip. This security/integrity level could be downgraded when any new exploits/vulnerabilities appeared. Transaction infrastructures could then determine if the chip met the minimum risk/security level required for the transaction. The proposal was also to include such chips in transaction devices and have the transaction infrastructure determine if both the transaction device (such as a point-of-sale terminal) *AND* the consumer device met the minimum risk requirements for the associated transaction.
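A minimal sketch of that registry idea in Python (hypothetical: the record layout, names, and levels are mine, purely to illustrate the register/downgrade/check flow described above):

from dataclasses import dataclass

@dataclass
class ChipRecord:
    public_key: bytes   # registered by the fab at manufacturing time
    assurance: int      # integrity level, downgraded as exploits appear

registry = {}   # chip id -> ChipRecord, maintained by the fab/registrar

def register(chip_id, public_key, assurance):
    registry[chip_id] = ChipRecord(public_key, assurance)

def downgrade(chip_id, new_level):
    rec = registry[chip_id]
    rec.assurance = min(rec.assurance, new_level)   # levels only go down

def meets_requirement(chip_id, minimum):
    rec = registry.get(chip_id)
    return rec is not None and rec.assurance >= minimum

register("chip-0001", b"...pubkey...", 4)
downgrade("chip-0001", 3)                  # new exploit against this fab run
print(meets_requirement("chip-0001", 4))   # False -> reject the transaction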
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Why Nixon's Prediction About Putin and Ukraine Matters Date: 11 Mar 2022 Blog: Facebook
Why Nixon's Prediction About Putin and Ukraine Matters
... then there is "Was Harvard responsible for the rise of Putin"
... after the fall of the Soviet Union, those sent over to teach
capitalism were more intent on looting the country (and the Russians
needed a Russian to oppose US looting). John Helmer: Convicted
Fraudster Jonathan Hay, Harvard's Man Who Wrecked Russia, Resurfaces
in Ukraine
http://www.nakedcapitalism.com/2015/02/convicted-fraudster-jonathan-hay-harvards-man-who-wrecked-russia-resurfaces-in-ukraine.html
If you are unfamiliar with this fiasco, which was also the true
proximate cause of Larry Summers' ouster from Harvard, you must read
an extraordinary expose, How Harvard Lost Russia, from Institutional
Investor. I am told copies of this article were stuffed in every
Harvard faculty member's inbox the day Summers got a vote of no
confidence and resigned shortly thereafter.
... snip ...
How Harvard lost Russia; The best and brightest of America's premier
university came to Moscow in the 1990s to teach Russians how to be
capitalists. This is the inside story of how their efforts led to
scandal and disgrace (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20130211131020/http://www.institutionalinvestor.com/Article/1020662/How-Harvard-lost-Russia.html
Mostly, they hurt Russia and its hopes of establishing a lasting
framework for a stable Western-style capitalism, as Summers himself
acknowledged when he testified under oath in the U.S. lawsuit in
Cambridge in 2002. "The project was of enormous value," said Summers,
who by then had been installed as the president of Harvard. "Its
cessation was damaging to Russian economic reform and to the
U.S.-Russian relationship."
... snip ...
trivia: I had gotten asked to help figure out how to do 5,000 banks across Russia (@$1M, $5B total) as part of making it a democratic country; however, before it got very far, the US capitalist looting sabotaged everything.
From CSPAN, talks about extraordinary cooperation between US & Russia
military in the 90s, 25th Anniversary Implementation of Nunn-Lugar Act
https://www.c-span.org/video/?419918-3/implementation-nunn-lugar-act
Nunn-Lugar
https://en.wikipedia.org/wiki/Nunn%E2%80%93Lugar_Cooperative_Threat_Reduction
US-style capitalist kleptocracy has a long history ... even predating
banana republics
https://www.garlic.com/~lynn/submisc.html#capitalism
US version with "War Is a Racket" and "Economic Hitman"
https://www.amazon.com/New-Confessions-Economic-Hit-Man-ebook/dp/B017MZ8EBM/
wiki entry
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man
also references Butler's "War Is a Racket"
https://en.wikipedia.org/wiki/War_Is_a_Racket
and "perpetual war" (for the military-industrial complex)
https://en.wikipedia.org/wiki/Perpetual_war
military-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The 1995 SQL Reunion: People, Projects, and Politics Date: 11 Mar 2022 Blog: Facebook
The 1995 SQL Reunion: People, Projects, and Politics
citations refs:
https://www.mcjones.org/System_R/citations.html
... including reference to my archived postings working with Jim Gray
and Vera Watson on System/R
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Active Defense: 2 Date: 11 Mar 2022 Blog: Facebook
Active Defense: 2
While it gets into motivation prompted by both civil & military sources, it fails to mention the US Military-Industrial(-Congressional) Complex (MIC) motivations, especially the MIC corporations' motivation for constantly increasing revenue & profit, even during periods of few or no threats and conflicts.
This century, before the Iraq invasion, the cousin of white house
chief of staff Card was dealing with the Iraqis at the UN and was
given evidence that the WMDs (tracing back to the US in the Iran/Iraq
war) had been decommissioned. The cousin shared it with (cousin,
white house chief of staff) Card and others ... and then was locked up
in a military hospital; the book was published in 2010 (4yrs before
the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/
NY Times series from 2014: the decommissioned WMDs (tracing back to
US from the Iran/Iraq war) had been found early in the invasion, but
the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html
Note the military-industrial complex had wanted a war so badly that
corporate reps were telling former eastern bloc countries that if
they voted for the IRAQ2 invasion in the UN, they would get membership
in NATO and (directed appropriation) USAID (which could *ONLY* be used
for purchase of the latest US arms, aka additional congressional gifts
to the MIC complex not in the DOD budget). From the law of unintended
consequences, the invaders were told to bypass ammo dumps looking for
WMDs; when they got around to going back, over a million metric tons
had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/
another reference from today, including motivation for "perpetual war"
(& conflict to sustain their increasing revenue)
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
more MI(C)C and perpetual war, How the Narcotic of Defense Spending
Undermines a Sensible Grand Strategy
https://slightlyeastofnew.com/2022/02/27/how-the-narcotic-of-defense-spending-undermines-a-sensible-grand-strategy/
How the Narcotic of Defense Spending Undermines a Sensible Grand
Strategy
http://chuckspinney.blogspot.com/2022/02/how-narcotic-of-defense-spending.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 15 Examples of How Different Life Was Before The Internet Date: 12 Mar 2022 Blog: Facebook
15 Examples of How Different Life Was Before The Internet. Before the internet, life was very different than today. Although some things, like eating, have not changed, other aspects of life then are unrecognizable today.
Aug1976, Tymshare
https://en.wikipedia.org/wiki/Tymshare
provides their CMS-based online computer conferencing system "free",
to the (IBM user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare
... lots of their TYMNET
https://en.wikipedia.org/wiki/Tymnet
POPs around the country.
I cut a deal with TYMSHARE to get monthly tape dumps of all VMSHARE files for making them available on internal IBM systems and the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s). One of my hardest problems was the IBM lawyers, who were concerned internal employees would be contaminated by being exposed to customer information (or sometimes finding that what executives were telling employees might not be what customers were saying).
On one of my visits to TYMSHARE they demo'ed a game called "ADVENTURE" (which they had found on the Stanford SAIL PDP10 system and ported to VM370/CMS); I got a copy and also made it available on the internal network.
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
I was also blamed for online computer conferencing on the internal
network in the late 70s and early 80s. It really took off after I
distributed a trip report about visit to Jim Gray at Tandem in spring
of 1981 (who had left IBM Research the fall before). Only about 300
were active participants, but claims up to 25,000 were reading. from
IBM JARGON:
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
We then print six copies of some 300 pages, with executive summary and
summary of the summary, package them in Tandem 3-ring binders, and send
them to the corporate executive committee (folklore is 5of6 wanted to fire
me, possibly one of the inhibitors was that one of my hobbies after
joining IBM was enhanced production operating systems for internal
datacenters, including the world-wide sales&marketing support "HONE"
systems). From summary of summary:
• The perception of many technical people in IBM is that the company is
rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible
• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors to
the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action
• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment.
... snip ...
... took another decade (1981-1992) ... IBM had gone into the red and
was being reorganized into the 13 "baby blues" in preparation for
breaking up the company .... reference gone behind paywall but mostly
lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online sales&marketing HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
In the early 80s, I had HSDT project (T1 and faster computer links),
was also working with the NSF director and was supposed to get $20M
to interconnect the NSF supercomputer centers ... giving several
presentations at various supercomputing locations ... then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). Copy of
"Preliminary Announce" (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP (possibly
contributing was the earlier online computer conferencing); the NSF
director tries to help by writing the company a letter (3Apr1986, NSF
Director to IBM Chief Scientist and IBM Senior VP and director of
Research, copying the IBM CEO) with support from other gov. agencies,
but that just makes the internal politics worse (as did claims that
what we already had running was at least 5yrs ahead of the winning
bid, RFP awarded 24Nov87). As regional networks connect in, it becomes
the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
other trivia:
the first webserver in the US was on the (Stanford) SLAC VM370/CMS
system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
... and the mosaic/netscape browser came out of the (OASC sponsored)
NCSA center
https://www.ncsa.illinois.edu/
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
recent posts mentioning Adventure:
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021j.html#82 IBM 370 and Future System
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
other recent posts mentioning VMSHARE:
https://www.garlic.com/~lynn/2022b.html#93 IBM Color 3279
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2022b.html#34 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#98 BITNET XMAS EXEC
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021j.html#77 IBM 370 and Future System
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#48 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#47 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021h.html#1 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#45 Cloud computing's destiny
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021c.html#12 Z/VM
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021b.html#81 The Golden Age of computer user groups
https://www.garlic.com/~lynn/2021b.html#69 Fumble Finger Distribution list
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency Date: 13 Mar 2022 Blog: Facebook
Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
After leaving IBM, we were brought in as consultants to a small client/server startup. Two of the former Oracle people (that we had worked with on HA/CMP RDBMS) were there, responsible for something called "commerce server", and they wanted to do payment transactions on the server. The startup had also invented this technology called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority on everything between the servers and the payment networks (I did the transaction differently; required to use the "SSL" libraries, but closed a lot of vulnerabilities) ... however I could only make recommendations on the browser/server side ... which were almost all violated.
Postel sponsored my talk on "Why Internet Isn't Business Critical Dataprocessing" ... based on the compensating software and procedures I had to do for webservers talking to the payment network gateways.
Having been involved with "electronic commerce", I was asked to participate in the (financial industry) X9A10 standards work; X9A10 had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments (not just internet) ... which resulted in the X9.59 standard (uses end-to-end public key but doesn't require PKI). The biggest vulnerability was that the account number was being used for dozens of business processes at millions of locations around the world while, at the same time, representing a form of authentication. X9.59 eliminated the account number from being used as authentication, instead using an end-to-end digital signature for authentication. This also eliminated having to use "SSL/TLS" encryption to hide the account number (no longer possible to perform a fraudulent transaction just knowing the account number, which also eliminated the majority of threats from breaches). It also worked with the EU directive that electronic payments at point of sale needed to be as anonymous as cash (aka, it was just the account number and an end-to-end digital signature for authentication ... but *NO* identification).
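A minimal sketch of that idea (not the actual X9.59 encoding; Ed25519 and the field names here are my illustrative choices): the consumer device signs the transaction details end-to-end, the bank verifies against the public key registered for the account, and knowing the account number alone is useless for fraud.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# at account setup: the consumer device generates a key pair and the
# bank registers the public key against the account (no identity involved)
device_key = Ed25519PrivateKey.generate()
registered_pubkey = device_key.public_key()   # held by the bank

# at point of sale: the device signs the transaction details end-to-end
txn = b"acct=123456789;amount=42.00;merchant=987;date=2022-03-13"
signature = device_key.sign(txn)

# at the bank: verify against the registered public key; raises
# cryptography.exceptions.InvalidSignature if forged or altered
registered_pubkey.verify(signature, txn)
print("transaction authenticated -- no identification required")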
The threat of x9.59 to the status quo for the major payment
stakeholders is another story. Their technical people had told me what
they thought was needed (but not how) ... w/o the business people
being involved. They then said that security really needed a secure
chip (but it was too expensive). So I said I was taking a $500
mil-spec chip, aggressively cost-reducing it to less than a dollar
while improving the security. I was also on a panel here:
http://csrc.nist.gov/nissc/1998/index.html
large main ballroom, standing room only ... long table up on stage,
I'm at one end, the PKI panelists move as far down as possible to the
other end of the table. We were then asked to help wordsmith some
Cal. legislation: "electronic signature", "data breach notification",
and "opt-in personal information sharing". Then the transit industry
asked for contactless operation and a transaction done within a 100ms
requirement for transit turnstiles (w/o any sacrifice in security
... chip under $1, more secure than the $500 mil-spec chip,
contactless, and transaction under 100ms).
The lead technical director for the Information Assurance Directorate
was doing a panel in the Trusted Computing track at 2001 IDF and asks
me to do a talk ... gone 404, but lives on at wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
I was hoping to get EAL5+ or EAL6+ certification ... but the crypto was built into the chip ... and then the certification for the crypto was pulled, so I had to fall back to EAL4+ certification.
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
X9.59 posts
https://www.garlic.com/~lynn/subpubkey.html#x959
X9.59 details
https://www.garlic.com/~lynn/x959.html#x959
(AADS) secure chip details/posts
https://www.garlic.com/~lynn/x959.html#aads
data breach notification posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification.notification
exploit/threat posts
https://www.garlic.com/~lynn/subintegrity.html#threat
posts mentioning nissc, not just PKI
https://www.garlic.com/~lynn/2019d.html#22 Rust in peace: Memory bugs in C and C++ code cause security issues so Microsoft is considering alternatives once again
https://www.garlic.com/~lynn/2018c.html#61 Famous paper on security and source code from the '60s or '70s
https://www.garlic.com/~lynn/2018c.html#56 Famous paper on security and source code from the '60s or '70s
https://www.garlic.com/~lynn/2018c.html#55 Famous paper on security and source code from the '60s or '70s
https://www.garlic.com/~lynn/2017h.html#10 Pentagon Would Ban Contractors That Don't Protect Data
https://www.garlic.com/~lynn/2016e.html#6 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#55 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2016b.html#66 Catching Up on the OPM Breach
https://www.garlic.com/~lynn/2016b.html#43 Ransomware
https://www.garlic.com/~lynn/2015f.html#20 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015.html#83 Winslow Wheeler's War
https://www.garlic.com/~lynn/2014m.html#26 Whole Earth
https://www.garlic.com/~lynn/2014l.html#55 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014k.html#42 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014b.html#23 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2012i.html#73 Operating System, what is it?
https://www.garlic.com/~lynn/2011k.html#36 50th anniversary of BASIC, COBOL? (warning: unusually violentthread drift)
https://www.garlic.com/~lynn/2005l.html#42 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2004j.html#7 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2002h.html#30 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2001h.html#0 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#0 FREE X.509 Certificates
https://www.garlic.com/~lynn/98.html#48 X9.59 & AADS
https://www.garlic.com/~lynn/aepay6.htm#gaopki2 GAO: Government faces obstacles in PKI security adoption
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm20.htm#39 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm2.htm#privacy Identification and Privacy are not Antinomies
https://www.garlic.com/~lynn/aadsm2.htm#integrity Scale (and the SRV record)
https://www.garlic.com/~lynn/aadsm2.htm#arch2 A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm18.htm#31 EMV cards as identity cards
https://www.garlic.com/~lynn/aadsm16.htm#5 DOD prepares for credentialing pilot
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm11.htm#33 ALARMED ... Only Mostly Dead ... RIP PKI
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency Date: 13 Mar 2022 Blog: Facebookre:
I've been involved in transmission security since the 70s. In the
early 80s had HSDT project with T1 and faster computer links,
including working with the NSF director, was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cuts the budget, some other things happen and then eventually they release an RFP
(in part based on what we already had running). Copy of "Preliminary
Announce" (28Mar1986):
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... internal IBM politics prevent us from bidding on the RFP (possibly
contributing was being blamed for online computer conferencing on the
internal network in the late 70s and early 80s and initial reaction by
the corporate executive committee was 5of6 wanted to fire me), the NSF
director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and Director of Research, copying IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse
(as did claims that what we already had running was at least 5yrs
ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes
the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
other trivia: the first webserver in the US was on the (stanford) SLAC
VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
... and the mosaic/netscape browser came out of the (OASC sponsored) NCSA center
https://www.ncsa.illinois.edu/
The internal corporate network was larger than the internet/arpanet from just about the beginning, and one of the differences was that all corporate links had to be encrypted. Standard mainframe links were capped at 56kbit ... so those had an easy time ... but I hated what I had to pay for T1 encryption, and faster was hard to find. Finally I became involved in hardware encryption that would handle at least 3mbytes/sec (not mbits) and cost less than $100 to build. The corporate crypto people claimed that it significantly weakened the DES crypto standard. It took me 3 months to figure out how to explain to them that instead of significantly weaker, it was significantly stronger than the standard. I was then told there was only one organization that was allowed to have such crypto; I could make as many as I wanted, but they all had to be sent to a location on the east coast. That was when I realized that there were 3 kinds of crypto in the world: 1) the kind they don't care about, 2) the kind you can't do, and 3) the kind you can only do for them.
We then had the HA/6000 project, which started out for NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I then rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors. Within a couple weeks of the Jan1992 RDBMS cluster scale-up meeting with the Oracle CEO, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4341 & 3270 Date: 13 Mar 2022 Blog: Facebook3272/3277 direct coax had .086sec hardware response ... early 80s, lots of studies showing at least .25sec response improved productivity. Some number of people were touting .25sec "system response" ... however, for a human to see .25sec response on 3272/3277, the system response had to be .164secs or better ... I had a bunch of systems that were getting .11sec system response (for workloads and hardware configurations where others were getting .25secs).
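The arithmetic, as a trivial sketch (numbers from the above):

  # perceived response = terminal hardware response + system response
  target = 0.25        # studies: .25sec perceived response improves productivity
  hw_3277 = 0.086      # 3272/3277 hardware response
  print(target - hw_3277)   # 0.164 -- system response needed on a 3277
                            # (my systems were delivering .11sec)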
for 3274/3278, they moved a lot of electronics out of the terminal back to the 3274 controller (reducing 3278 manufacturing cost) ... which significantly increased the round-trip protocol chatter, making elapsed time data-stream dependent ... starting at .3sec to .5sec hardware response or worse (no way of delivering quarter second). Letters written to the 3274/3278 product administrator about being much worse for interactive computing came back saying that they weren't intended for interactive computing, but data entry (i.e. electronic keypunch). Later, IBM/PC 3277 (terminal emulation) cards had 3-4 times the upload/download throughput of 3278 cards.
Endicott cons me into doing the analysis and helping with the 138/148 microcode assist implementation ... which also shows up in 4331/4341. I'm out in San Jose and rewrite the I/O supervisor to make it bullet proof and never fail ... so bldg14 (disk engineering) and bldg15 (product test) can move from prescheduled, dedicated stand-alone testing ... to any amount of concurrent, on-demand testing ... greatly improving productivity. Bldg15 tended to get very early engineering processors (sometimes the 3rd or 4th machine) for disk i/o channel testing. They get a very early engineering 3033 and then a very early engineering 4341 ... people in Endicott start complaining that I have more 4341 time than they do. In Jan1979, I get con'ed into doing 4341 benchmarks for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
Endicott also con'ed me into running around the world with them giving microcode assist presentations to business planners that would be doing the business cases.
post with initial microcode assist analysis
https://www.garlic.com/~lynn/94.html#21
processor microcode posts
https://www.garlic.com/~lynn/submain.html#mcode
3272/3277 posts referencing .086sec hardware response
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2015d.html#33 Remember 3277?
https://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2012e.html#90 Just for a laugh ... How to spot an old IBMer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Rise of DOS: How Microsoft Got the IBM PC OS Contract Date: 14 Mar 2022 Blog: FacebookThe Rise of DOS: How Microsoft Got the IBM PC OS Contract
trivia: CMS was precursor to personal computing; before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on cp/67-cms at npg (gone 404,
but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
note: some of the MIT CTSS people went to the 5th flr to Project MAC to do MULTICS (which also spawns UNIX, periodically described as simplified MULTICS). Others of the CTSS people went to the IBM science center on the 4th flr and did CP40/CMS (on a 360/40 with hardware modifications supporting virtual memory, which morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available), lots of online apps, invented GML in 1969 (CTSS RUNOFF had been redone for CMS as script; after inventing GML, GML tag processing is added to script; GML morphs into ISO standard SGML a decade later and after another decade morphs into HTML at CERN), and a bunch of performance & optimization work.
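For flavor: GML markup used colon-prefixed tags ended with a period, which SGML/HTML later turned into angle-bracket tags. A toy sketch in python (the two-tag subset and the mapping are illustrative assumptions, not any real script/DCF implementation):

  import re

  # map a couple of GML-style tags to rough HTML descendants (illustrative)
  TAGS = {"h1": "h1", "p": "p"}

  def gml_to_html(line):
      # GML tags look like ":h1.Heading text" at the start of a line
      m = re.match(r":(\w+)\.(.*)", line)
      if not m or m.group(1) not in TAGS:
          return line
      tag, text = TAGS[m.group(1)], m.group(2)
      return "<%s>%s</%s>" % (tag, text, tag)

  print(gml_to_html(":h1.GML invented at the science center, 1969"))
  print(gml_to_html(":p.GML morphs into SGML, then HTML at CERN."))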
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
other CP67 & VM370 history
https://www.leeandmelindavarian.com/Melinda#VMHist
Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates,
CEO of the then-small software firm Microsoft, to discuss the
possibility of using Microsoft PC-DOS OS for IBM's
about-to-be-released PC. Opel set up the meeting at the request of
Gates' mother, Mary Maxwell Gates. The two had both served on the
National United Way's executive committee.
... snip ...
other trivia: the first webserver in the US was on the (stanford) SLAC
VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
this talks about evolution of SGML into html
http://infomesh.net/html/history/early/
other sgml history
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
posts from yesterday about the web
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Totally Dodgy Backstory of the Bank that Just Refinanced Trump Tower Date: 14 Mar 2022 Blog: FacebookThe Totally Dodgy Backstory of the Bank that Just Refinanced Trump Tower. How Axos -- a financial firm tied to GOP politics and high-profile lawsuits -- became the Trumps' lender of last resort
past posts referencing Trump and "bankruptcies, Russian money,
Deutsche bank money"
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021f.html#1 'Madman ... racist, sexist pig': new book details Obama's real thoughts on Trump
https://www.garlic.com/~lynn/2021c.html#81 Russia, not China, tried to influence 2020 election, says US intel community
https://www.garlic.com/~lynn/2021.html#30 Trump and Republican Party Racism
https://www.garlic.com/~lynn/2019e.html#78 Retired Marine Gen. John Allen: 'There is blood on Trump's hands for abandoning our Kurdish allies'
https://www.garlic.com/~lynn/2019b.html#70 Russia Hacked Clinton's Computers Five Hours After Trump's Call
https://www.garlic.com/~lynn/2019b.html#1 Billions From Deutsche Bank Despite Trump's Bankruptcies, Defaults, and Financial Malfeasance
https://www.garlic.com/~lynn/2018e.html#32 12 Russian Agents Indicted in Mueller Investigation
https://www.garlic.com/~lynn/2018b.html#98 The Cambridge Analytica scandal is what Facebook-powered election cheating looks like:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The United States Of America: Victims Of Its Own Disinformation Date: 14 Mar 2022 Blog: FacebookThe United States Of America: Victims Of Its Own Disinformation
... from the truth-is-stranger-than-fiction and law-of-unintended-consequences-come-back-to-bite-you files: much of radical Islam & ISIS can be considered our own fault; VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:
There was also a calculated decision to use the Saudis as surrogates
in the cold war. The United States actually encouraged Saudi efforts
to spread the extremist Wahhabi form of Islam as a way of stirring up
large Muslim communities in Soviet-controlled countries. (It didn't
hurt that Muslim Soviet Asia contained what were believed to be the
world's largest undeveloped reserves of oil.)
... snip ...
Saudi radical extremist Islam/Wahhabi loosed on the world ... bin Laden & 15of19 9/11 hijackers were Saudis (some claims that 95% of extremist Islamic terrorism worldwide is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism
internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:
But al-Qaeda did maintain unusually close ties with our allies the
Saudis, a fact that the Bush White House worked suspiciously hard to
suppress as we went to war with two other countries.
... snip ...
Mattis somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:
Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting
the Shah and swearing hostility against the United States. That same
year, the Soviet Union was pouring troops into Afghanistan to prop up
a pro-Russian government that was opposed by Sunni Islamist
fundamentalists and tribal factions. The United States was supporting
Saudi Arabia's involvement in forming a counterweight to Soviet
influence.
... snip ...
MI6, the coup in Iran that changed the Middle East, and the cover-up;
Documentary reveals evidence confirming a British spy's role in
restoring the Shah in 1953 - and how the Observer exposed the plot
https://www.theguardian.com/world/2020/aug/02/mi6-the-coup-in-iran-that-changed-the-middle-east-and-the-cover-up
The World Crisis, Vol. 1, Churchill explains the mess in middle east
started with move from 13.5in to 15in Naval guns (leading to moving
from coal to oil)
https://www.amazon.com/Crisis-1911-1914-Winston-Churchill-Collection-ebook/dp/B07H18FWXR/
loc2012-14:
From the beginning there appeared a ship carrying ten 15-inch guns,
and therefore at least 600 feet long with room inside her for engines
which would drive her 21 knots and capacity to carry armour which on
the armoured belt, the turrets and the conning tower would reach the
thickness unprecedented in the British Service of 13 inches.
loc2087-89:
To build any large additional number of oil-burning ships meant basing
our naval supremacy upon oil. But oil was not found in appreciable
quantities in our islands. If we required it, we must carry it by sea
in peace or war from distant countries.
loc2151-56:
This led to enormous expense and to tremendous opposition on the Naval
Estimates. Yet it was absolutely impossible to turn back. We could
only fight our way forward, and finally we found our way to the
Anglo-Persian Oil agreement and contract, which for an initial
investment of two millions of public money (subsequently increased to
five millions) has not only secured to the Navy a very substantial
proportion of its oil supply, but has led to the acquisition by the
Government of a controlling share in oil properties and interests
which are at present valued at scores of millions sterling, and also
to very considerable economies, which are still continuing, in the
purchase price of Admiralty oil.
... snip ...
When the newly elected democratic government wanted to review the
Anglo-Persian contract, US arranged coup and backed Shah as front
https://unredacted.com/2018/03/19/cia-caught-between-operational-security-and-analytical-quality-in-1953-iran-coup-planning/
https://en.wikipedia.org/wiki/Kermit_Roosevelt,_Jr%2E
https://en.wikipedia.org/wiki/1953_Iranian_coup_d%27%C3%A9tat
... and Schwarzkopf (senior) training of the secret police to help
keep Shah in power
https://en.wikipedia.org/wiki/SAVAK
Savak Agent Describes How He Tortured Hundreds
https://www.nytimes.com/1979/06/18/archives/savak-agent-describes-how-he-tortured-hundreds-trial-is-in-a-mosque.html
The Iranian people eventually revolt against the horribly oppressive (US-backed) autocratic government.
military-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
past posts mentioning wahhabi
https://www.garlic.com/~lynn/2022.html#97 9/11 and the Road to War
https://www.garlic.com/~lynn/2021j.html#112 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#90 Afghanistan Proved Eisenhower Correct
https://www.garlic.com/~lynn/2021j.html#57 After 9/11, the U.S. Got Almost Everything Wrong
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021i.html#52 By Letting Saudi Arabia Off the Hook Over 9/11, the US Encouraged Violent Jihadism
https://www.garlic.com/~lynn/2021i.html#50 FBI releases first secret 9/11 file
https://www.garlic.com/~lynn/2021i.html#49 The Counterinsurgency Myth
https://www.garlic.com/~lynn/2021i.html#46 FBI releases first secret 9/11 file
https://www.garlic.com/~lynn/2021i.html#38 The Accumulated Evil of the Whole: That time Bush and Co. made the September 11 Attacks a Pretext for War on Iraq
https://www.garlic.com/~lynn/2021i.html#37 9/11 and the Saudi Connection. Mounting evidence supports allegations that Saudi Arabia helped fund the 9/11 attacks
https://www.garlic.com/~lynn/2021i.html#18 A War's Epitaph. For Two Decades, Americans Told One Lie After Another About What They Were Doing in Afghanistan
https://www.garlic.com/~lynn/2021h.html#62 An Un-American Way of War: Why the United States Fails at Irregular Warfare
https://www.garlic.com/~lynn/2021h.html#57 Generation of Vipers
https://www.garlic.com/~lynn/2021h.html#42 Afghanistan Down the Drain
https://www.garlic.com/~lynn/2021h.html#11 Democratic senators increase pressure to declassify 9/11 documents related to Saudi role in attacks
https://www.garlic.com/~lynn/2021g.html#102 Democratic senators increase pressure to declassify 9/11 documents related to Saudi role in attacks
https://www.garlic.com/~lynn/2021g.html#4 Donald Rumsfeld, The Controversial Architect Of The Iraq War, Has Died
https://www.garlic.com/~lynn/2021f.html#95 Geopolitics, Profit, and Poppies: How the CIA Turned Afghanistan into a Failed Narco-State
https://www.garlic.com/~lynn/2021f.html#71 Inflating China Threat to Balloon Pentagon Budget
https://www.garlic.com/~lynn/2021f.html#65 Biden takes steps to rein in 'forever wars' in Afghanistan and Iraq
https://www.garlic.com/~lynn/2021f.html#59 White House backs bill to end Iraq war military authorization
https://www.garlic.com/~lynn/2021e.html#42 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2021c.html#22 Fighting to Go Home: Operation Desert Storm, 30 Years Later
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
https://www.garlic.com/~lynn/2019e.html#143 "Undeniable Evidence": Explosive Classified Docs Reveal Afghan War Mass Deception
https://www.garlic.com/~lynn/2019e.html#135 Permanent Record
https://www.garlic.com/~lynn/2019e.html#124 'Deep, Dark Conspiracy Theories' Hound Some Civil Servants In Trump Era
https://www.garlic.com/~lynn/2019e.html#113 Post 9/11 wars have cost American taxpayers $6.4 trillion, study finds
https://www.garlic.com/~lynn/2019e.html#105 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#85 Just and Unjust Wars
https://www.garlic.com/~lynn/2019e.html#70 Since 2001 We Have Spent $32 Million Per Hour on War
https://www.garlic.com/~lynn/2019e.html#67 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#58 Homeland Security Dept. Affirms Threat of White Supremacy After Years of Prodding
https://www.garlic.com/~lynn/2019e.html#26 Radical Muslim
https://www.garlic.com/~lynn/2019e.html#25 Radical Muslim
https://www.garlic.com/~lynn/2019e.html#23 Radical Muslim
https://www.garlic.com/~lynn/2019e.html#22 Radical Muslim
https://www.garlic.com/~lynn/2019e.html#18 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2019e.html#15 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2019d.html#99 Trump claims he's the messiah. Maybe he should quit while he's ahead
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#77 Magic and Mayhem: The Delusions of American Foreign Policy From Korea to Afghanistan
https://www.garlic.com/~lynn/2019d.html#65 What Happened to Aung San Suu Kyi?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#47 Declassified CIA Document Reveals Iraq War Had Zero Justification
https://www.garlic.com/~lynn/2019d.html#32 William Barr Supported Pardons In An Earlier D.C. 'Witch Hunt': Iran-Contra
https://www.garlic.com/~lynn/2019d.html#7 You paid taxes. These corporations didn't
https://www.garlic.com/~lynn/2019c.html#65 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019c.html#15 Don't forget how the Soviet Union saved the world from Hitler
https://www.garlic.com/~lynn/2019b.html#56 U.S. Has Spent Six Trillion Dollars on Wars That Killed Half a Million People Since 9/11, Report Says
https://www.garlic.com/~lynn/2019b.html#17 How Iran Won Our Iraq War
https://www.garlic.com/~lynn/2019.html#48 Iran Payments
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#43 Billionaire warlords: Why the future is medieval
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets
https://www.garlic.com/~lynn/2017f.html#73 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#52 [CM] What was your first home computer?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Watch Thy Neighbor Date: 15 Mar 2022 Blog: FacebookWatch Thy Neighbor. To prevent whistleblowing, U.S. intelligence agencies are instructing staff to spy on their colleagues.
Original POGO post from 15Mar2016
https://www.facebook.com/permalink.php?story_fbid=10153366774017882&id=26082912881&_fb_noscript=1
Archived reference from 2019
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
Note: we were tangentially involved ... but didn't realize it until the "success of failure" articles.
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/
2002, we get a call asking us if we would respond to an unclassified BAA by IC-ARDA (since renamed IARPA) which was about to close (and that basically said that none of the tools they had did the job). We get in the response and have some meetings showing that we could do what was required ... and then nothing. We initially wondered why the agency allowed the BAA to be released; if they weren't going to do anything, one conjecture was they anticipated that nobody would respond, which would help damp down complaints. Disclaimer: I don't have clearance, although agencies have used my software back to my undergraduate days in the 60s (I didn't find that out until later).
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failuree
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM investors staged 2021 revolt over exec pay Date: 15 Mar 2022 Blog: FacebookIBM investors staged 2021 revolt over exec pay. Former Red Hat boss Jim Whitehurst's $22.5m share award got shareholders all steamed up, documents confirm
IBM becomes financial engineering company ... stock buybacks used to be illegal (because it was too easy for executives to manipulate the market ... aka banned in the wake of the '29 crash)
https://corpgov.law.harvard.edu/2020/10/23/the-dangers-of-buybacks-mitigating-common-pitfalls/
Buybacks are a fairly new phenomenon and have been gaining in
popularity relative to dividends recently. All but banned in the US
during the 1930s, buybacks were seen as a form of market
manipulation. Buybacks were largely illegal until 1982, when the SEC
adopted Rule 10B-18 (the safe-harbor provision) under the Reagan
administration to combat corporate raiders. This change reintroduced
buybacks in the US, leading to wider adoption around the world over
the next 20 years. Figure 1 (below) shows that the use of buybacks in
non-US companies grew from 14 percent in 1999 to 43 percent in 2018.
... snip ...
Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks
https://web.archive.org/web/20140623003038/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some recent posts mentioning "Great Deformation"
https://www.garlic.com/~lynn/2022b.html#96 Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US
https://www.garlic.com/~lynn/2022b.html#52 IBM History
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2022.html#99 Science Fiction is a Luddite Literature
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#96 'Most Americans Today Believe the Stock Market Is Rigged, and They're Right'
https://www.garlic.com/~lynn/2021k.html#11 General Electric Breaks Up
https://www.garlic.com/~lynn/2021k.html#3 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#101 Who Says Elephants Can't Dance?
https://www.garlic.com/~lynn/2021j.html#47 Economists to Cattle Ranchers: Stop Being So Emotional About the Monopolies Devouring Your Family Businesses
https://www.garlic.com/~lynn/2021i.html#80 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#63 Sears is shutting its last store in Illinois, its home state
https://www.garlic.com/~lynn/2021i.html#32 Counterfeit Capitalism: Why a Monopolized Economy Leads to Inflation and Shortages
https://www.garlic.com/~lynn/2021h.html#18 Whatever Happened to Six Sigma?
https://www.garlic.com/~lynn/2021g.html#20 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021g.html#11 Miami Building Collapse Could Profoundly Change Engineering
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021f.html#55 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#24 IBM Remains Big Tech's Disaster
https://www.garlic.com/~lynn/2021f.html#6 Financial Engineering
https://www.garlic.com/~lynn/2021f.html#4 Study: Are You Too Nice to be Financially Successful?
https://www.garlic.com/~lynn/2021b.html#7 IBM & Apple
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Modern Capitalism Is Weirder Than You Think It also no longer works as advertised Date: 17 Mar 2022 Blog: FacebookModern Capitalism Is Weirder Than You Think It also no longer works as advertised.
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Is it better to protect banks or people during crisis? Now we have an
answer
https://www.latimes.com/business/story/2022-03-16/covid-fiscal-stimulus-saved-america-from-recession
The economic mess started out with the players buying up mortgages, securitizing them, paying the rating agencies for triple-A ratings (when the rating agencies knew they weren't worth triple-A) and selling into the bond market at a higher price (being able to do over $27T 2001-2008). Then they started creating securitized mortgages designed to fail (pay for triple-A, sell into the bond market), and taking out (CDS) gambling bets that they would fail. As the economy was failing, SECTREAS convinces congress to appropriate $700B in TARP funds
https://en.wikipedia.org/wiki/Troubled_Asset_Relief_Program
supposedly for buying TBTF "off-book" troubled assets.
However, the largest holder of the (CDS) gambling bets was AIG, which was negotiating to pay them off at 50 cents on the dollar when the SECTREAS steps in, has them sign a document that they can't sue those making the gambling bets, and take TARP funds in order to pay off at face value. The largest recipient of TARP funds was AIG, and the largest recipient of face-value payoffs was the firm formerly headed by SECTREAS ... a firm that was also one of the major players in the summer 2008 CFTC oil/gas price spike.
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Too Big To Fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
triple-A rated, toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
Griftopia posts (& CFTC allowing speculators to play)
https://www.garlic.com/~lynn/submisc.html#griftopia
TBTF bail-out wasn't TARP (i.e. ye2008, just the four largest TBTF had over $5.2T in off-book toxic assets; if forced to bring them back on the books, they would have been declared insolvent and forced to be liquidated; TARP's $700B could never have saved them). It was the Federal Reserve that was buying trillions of off-book toxic assets at 98cents on the dollar and providing tens of trillions in ZIRP funds.
https://en.wikipedia.org/wiki/Zero_interest-rate_policy
The Federal Reserve fought a legal battle to prevent disclosing what it was doing. When it lost, the chairman had a press conference and said that they had expected the TBTF to use the funds to help main street, but when they didn't (just pocketing the money), he had no way to force them. However, 1) that didn't stop the ZIRP funds, and 2) the chairman had been partially selected for being a depression-era scholar; something similar was tried then, with the same results.
fed chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
ZIRP funds
https://www.garlic.com/~lynn/submisc.html#zirp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Downfall: The Case Against Boeing Date: 17 Mar 2022 Blog: FacebookDownfall: The Case Against Boeing
2016 Boeing 100th anniv article "The Boeing Century"
https://issuu.com/pnwmarketplace/docs/i20160708144953115
included a long article, "Scrappy start forged a company built to last", with an analysis of the Boeing merger with M/D ("A different Boeing") and the disastrous effects it had on the company ... and even though many of those people are gone, it still leaves the future of the company in doubt. One was the M/D (military-industrial complex) culture of outsourcing to lots of entities in different jurisdictions as part of catering to political interests ... as opposed to focusing on producing quality products ... which shows up in the effects it had on the 787.
The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than
engineers. And though Boeing was the buyer, McDonnell Douglas
executives some how took power in what analysts started calling a
"reverse takeover." The joke in Seattle was, "McDonnell Douglas bought
Boeing with Boeing's money."
... snip ...
Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the
company's estimable engineering legacy. He had mountains of evidence
to support his position, mostly acquired via Boeing's 1997 acquisition
of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft
plant in Long Beach and a CEO who liked to use what he called the
Hollywood model" for dealing with engineers: Hire them for a few
months when project deadlines are nigh, fire them when you need to
make numbers. In 2000, Boeing's engineers staged a 40-day strike over
the McDonnell deal's fallout; while they won major material
concessions from management, they lost the culture war. They also
inherited a notoriously dysfunctional product line from the
corner-cutting market gurus at McDonnell.
... snip ...
Boeing's travails show what's wrong with modern
capitalism. Deregulation means a company once run by engineers is now
in the thrall of financiers and its stock remains high even as its
planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
disclaimer: within a year of taking a 2credit intro to fortran/computers, the univ hires me fulltime, responsible for os/360. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think Renton was possibly the largest datacenter, something like a couple hundred million in 360s. Lots of politics between the director of Renton datacenter and the CFO (who only had a 360/30 up at boeing field for payroll, although they expanded the machine room for a 360/67 for me to play with when I wasn't doing other stuff).
past posts mentioning "The Boeing Century"
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2018c.html#21 How China's New Stealth Fighter Could Soon Surpass the US F-22 Raptor
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2016e.html#20 The Boeing Century
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Disks Date: 18 Mar 2022 Blog: Facebookre:
litigation resulted in the 23jun1969 unbundling announcement ... starting to (separately) charge for (application) software, SE services, maintenance, etc (but managed to make the case that kernel software would still be free).
In the early 70s there was the "Future System" effort ... replace all 370s with something completely different (internal politics was killing off 370 projects ... and the lack of new 370s is credited with giving clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipeline (including kicking off the quick&dirty 3033 & 3081 efforts in parallel).
Clone 370s were also motivation to start charging for kernel software ... initially kernel addons ... then transitioning to charging for all kernel software ... and my dynamic adaptive resource manager (part of what I was deploying for internal datacenters) was the initial guinea pig (got to spend time with business planners and lawyers); aka, one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters.
Early 80s, transition was complete and the OCO-wars (object code only,
no more source available) began ... can see some discussion in the
VMSHARE archives i.e. in Aug1976, TYMSHARE started offering their
CMS-based online computer conferencing free to the SHARE organization
... archives here
http://vm.marist.edu/~vmshare
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management and scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
online commercial virtual machine service
https://www.garlic.com/~lynn/submain.html#timeshare
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Are Prices "Engines of Chaos"? Date: 18 Mar 2022 Blog: FacebookAre Prices "Engines of Chaos"? In a new book, Rupert Russell lays out a connection between financial speculation, hunger, and war.
There were articles that US speculators were behind the enormous oil (& gas) price spike of summer 2008. Then a member of congress releases the speculation transactions that identified the corporations responsible for the enormous oil (& gas) price spike. For some reason, the press then pilloried&vilified the member of congress for violating corporation privacy (& exposing the corporations that were preying on the US public), rather than trying to hold the speculators accountable.
(summer 2008) Oil settles at record high above $140
https://money.cnn.com/2008/06/27/markets/oil/
"GRIFTOPIA" had chapter on CFTC commodities market had requirement
that a significant position was required it order to play because
speculators were responsible for wild irrational price changes, making
money off volatility aka a) buy low, sell high, betting price goes up
but purely pump&dump game behind the press driving up the price, then
turn around and short, behind the press driving price down. Then a
select group of speculators were invited to play.
https://en.wikipedia.org/wiki/Griftopia
It also happens in the equity/stock market, like a casino where all the games are rigged ... where the principals bet on the direction of the change and manipulate things to force the desired change. They do pump&dump ... pushing the market up (buy low, sell high) and then short ... pushing the market down. An old interview that they are all doing illegal activity (before it got much worse with HFT) ... and have nothing to worry about from the SEC.
http://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
and "perpetual war" (for the military-industrial complex)
https://www.garlic.com/~lynn/submisc.html#perpetual.war
also
https://en.wikipedia.org/wiki/Perpetual_war
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SEC takes its finger out of the dike with investigation of Big 4 auditors' conflicts Date: 19 Mar 2022 Blog: FacebookSEC takes its finger out of the dike with investigation of Big 4 auditors' conflicts
After ENRON,
https://en.wikipedia.org/wiki/Enron
rhetoric in congress was that Sarbanes-Oxley (2002)
https://en.wikipedia.org/wiki/Sarbanes%E2%80%93Oxley_Act
would prevent future ENRONs and guarantee executives and auditors did
jail time. However, the joke was that Washington DC felt badly that one of the "big five" went out of business and SOX was really a gift to the audit industry (significantly increasing audit business), but nothing would change. Note that SOX required SEC do something; possibly because even GAO didn't believe SEC was doing anything, GAO started doing reports of public company fraudulent financial reporting, even showing that it increased after SOX went into effect (and nobody doing jailtime).
https://www.gao.gov/products/gao-06-1079sp
Less well known was that SOX also required SEC do something about the credit rating agencies ... which played a major role in the economic mess, selling triple-A ratings for securitized mortgages that they knew weren't worth triple-A (from Oct2008 congressional hearings), significantly contributing to being able to sell over $27T (that is TRILLION) into the bond market, 2001-2008.
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
fraudulent financial reporting posts
https://www.garlic.com/~lynn/submisc.html#fraudulent.financial.filings
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
triple-A rated toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Lack of Unix in 70s/80s hacker culture? Newsgroups: alt.folklore.computers Date: Sun, 20 Mar 2022 17:25:17 -1000Ahem A Rivet's Shot <steveo@eircom.net> writes:
Internal network technology also used for the corporate sponsored bitnet
https://en.wikipedia.org/wiki/BITNET
also extended to Europe
https://en.wikipedia.org/wiki/European_Academic_Research_Network
NSF funded CSNET ... later merges with BITNET
https://en.wikipedia.org/wiki/CSNET
I had HSDT project for T1 and faster computer links and was working with
the NSF director and was supposed to get $20M to interconnect the NSF
supercomputer centers. Then congress cuts the budget, some other things
happen and finally an RFP is released (in part based on what we already
had running). Copy of "Preliminary Announce" (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12
"The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program to
foster new supercomputer software and hardware developments; and the
Networking Program to build a National Supercomputer Access Network -
NSFnet".
... snip ...
... internal IBM politics prevent us from bidding on the RFP, the NSF
director tries to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and Director of Research, copying IBM CEO) with support from other gov. agencies, but that just makes the internal politics worse
(as did claims that what we already had running was at least 5yrs ahead
of the winning bid). The RFP called for T1 links ... but they were only
putting in 440kbit/sec links ... then to make it look like they were
meeting the RFP, they put in T1 trunks with telco multiplexors running
multiple 440kbit links. As regional networks connect in, it morphs into
the NSFNET backbone, precursor to modern internet
https://www.technologyreview.com/s/401444/grid-computing/
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Lack of Unix in 70s/80s hacker culture? Newsgroups: alt.folklore.computers Date: Mon, 21 Mar 2022 08:52:44 -1000Lynn Wheeler <lynn@garlic.com> writes:
note: at the time of the 1jan1983 cutover to internetworking protocol from
IMP/host protocols, there were approx. 100 network IMP nodes with 250
connected hosts. old csnet/internet email:
Date: 10/22/82 14:25:57
To: CSNET mailing list
Subject: CSNET PhoneNet connection functional
The IBM San Jose Research Lab is the first IBM site to be registered on
CSNET (node-id is IBM-SJ), and our link to the PhoneNet relay at
University of Delaware has just become operational! For initial testing
of the link, I would like to have traffic from people who normally use
the ARPANET, and who would be understanding about delays, etc. If you
are such a person, please send me your userid (and nodeid if not on
SJRLVM1), and I'll send instructions on how to use the
connection. People outside the department or without prior usage of of
ARPANET may also register at this time if there is a pressing need, such
as being on a conference program committee, etc.
CSNET (Computer Science NETwork) is funded by NSF, and is an attempt to
connect all computer science research institutions in the U.S. It does
not have a physical network of its own, but rather is a set of common
protocols used on top of the ARPANET (Department of Defense), TeleNet
(GTE), and PhoneNet (the regular phone system). The lowest-cost entry is
through PhoneNet, which only requires the addition of a modem to an
existing computer system. PhoneNet offers only message transfer
(off-line, queued, files). TeleNet and ARPANET in allow higher-speed
connections and on-line network capabilities such as remote file lookup
and transfer on-line, and remote login.
... snip ... top of post, old email index
===========================================================================
Date: 30 Dec 1982 14:45:34 EST (Thursday)
From: Nancy Mimno mimno@Bbn-Unix
Subject: Notice of TCP/IP Transition on ARPANET
To: csnet-liaisons at Udel-Relay
Cc: mimno at Bbn-Unix
Via: Bbn-Unix; 30 Dec 82 16:07-EST
Via: Udel-Relay; 30 Dec 82 13:15-PDT
Via: Rand-Relay; 30 Dec 82 16:30-EST
ARPANET Transition 1 January 1983
Possible Service Disruption
---------------------------------
Dear Liaison,
As many of you may be aware, the ARPANET has been going through the
major transition of shifting the host-host level protocol from NCP
(Network Control Protocol/Program) to TCP-IP (Transmission Control
Protocol - Internet Protocol). These two host-host level protocols are
completely different and are incompatible. This transition has been
planned and carried out over the past several years, proceeding from
initial test implementations through parallel operation over the last
year, and culminating in a cutover to TCP-IP only 1 January 1983. DCA
and DARPA have provided substantial support for TCP-IP development
throughout this period and are committed to the cutover date.
The CSNET team has been doing all it can to facilitate its part in this
transition. The change to TCP-IP is complete for all the CSNET host
facilities that use the ARPANET: the CSNET relays at Delaware and Rand,
the CSNET Service Host and Name Server at Wisconsin, the CSNET CIC at
BBN, and the X.25 development system at Purdue. Some of these systems
have been using TCP-IP for quite a while, and therefore we expect few
problems. (Please note that we say "few", not "NO problems"!) Mail
between Phonenet sites should not be affected by the ARPANET
transition. However, mail between Phonenet sites and ARPANET sites
(other than the CSNET facilities noted above) may be disrupted.
The transition requires a major change in each of the more than 250
hosts on the ARPANET; as might be expected, not all hosts will be ready
on 1 January 1983. For CSNET, this means that disruption of mail
communication will likely result between Phonenet users and some ARPANET
users. Mail to/from some ARPANET hosts may be delayed; some host mail
service may be unreliable; some hosts may be completely
unreachable. Furthermore, for some ARPANET hosts this disruption may
last a long time, until their TCP-IP implementations are up and working
smoothly. While we cannot control the actions of ARPANET hosts, please
let us know if we can assist with problems, particularly by clearing up
any confusion. As always, we are or (617)497-2777.
Please pass this information on to your users.
Respectfully yours,
Nancy Mimno
CSNET CIC Liaison
... snip ... top of post, old email index
===========================================================================
Date: 02/02/83 23:49:45
To: CSNET mailing list
Subject: CSNET headers, CSNET status
You may have noticed that since ARPANET switched to TCP/IP and the new
version of software on top of it, message headers have become
ridiculously long. Some of it is because of tracing information that has
been added to facilitate error isolation and "authentication", and some
of it I think is a bug (the relay adds a 'From' and a 'Date' header
although there already are headers with that information in the
message). This usually doesn't bother people on the ARPANET because they
have smart mail reading programs that understand the headers and only
display the relevant ones. I have proposed a mail reader/sender program
that understands about ARPANET headers (RFC822) as a summer project, so
maybe we will sometime enjoy the same priviledge.
The file CSNET STATUS1 on the CSNET disk (see instructions below for how
to access it) contains some clarification of the problems that have been
experienced with the TCP/IP conversion. Here is a summary:
- Nodes that don't yet talk TCP (but the old NCP) can be accessed through
the UDel-Relay. So if you think you have problems reaching a node
because of this, append @Udel-Relay to the ARPANET address.
- You can find out about the status of hosts (e.g., if they run TCP or
not) by sending ANY MESSAGE to Status@UDel-Relay (capitalization is NOT
significant).
- If your messages are undeliverable, you get a notice after two days, and
your messages get returned after 4 days.
- Avoid using any of the fancy address forms allowed by the new header
format (RFC822).
- The TCP transition was a lot more trouble than the ARPANET people had
anticipated.
... snip ... top of post, old email index
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: System Response Date: 22 Mar 2022 Blog: Facebook.25sec or better: turns out 3272/3277 had hardware response of .086secs ... so system response had to be .164secs or better. Response & productivity studies came about the same time as 3278, where they moved lots of electronics back to the 3274 controller (to reduce 3278 manufacturing costs), which really drove up coax protocol chatter and latency ... so hardware response became somewhat proportional to the amount of data, in the .3sec to .5sec range (or worse). Complaints to the 3278 product administrator got the response that 3278 wasn't intended for interactive computing, but data entry (electronic keypunch) ... of course TSO users rarely even saw 1sec system response ... so they never noticed the difference between 3277 and 3278.
There were some internal datacenters giving awards and bragging about their .25sec system response ... so I would needle them that my systems with similar configurations and workload had .11sec system response (nobody was going to give me awards with 5of6 of the corporate executive committee wanting to fire me ... for online computer conferencing on the internal network).
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
1980, STL was bursting at the seams and they were moving 300 people from STL to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 but found the human factors totally unacceptable (compared to response in STL). I get con'ed into doing channel-extender support so they can put local channel-attach 3270 controllers in the offsite bldg (with no perceived difference between offsite and in STL). The hardware vendor tries to talk IBM into releasing my support, but there are some engineers in POK playing with some serial stuff that get it vetoed (afraid that if it was in the market, it would make it harder to release their stuff).
In 1988, I get asked by the branch to help LLNL (national lab) standardize some serial stuff that they were playing with, which quickly becomes fibre channel standard (including some stuff I had done in 1980). Initially it is 1gbit/sec full-duplex, 2gbit/sec aggregate (200mbyte/sec). Then the POK engineers finally get their stuff released in 1990 with ES/9000 as ESCON when it is already obsolete (17mbytes/sec).
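One way those numbers line up, as a sketch (assuming fibre channel's 8b/10b line coding, i.e. 10 line bits per data byte):

  # 1gbit/sec each direction, full-duplex
  aggregate_bits_per_sec = 2 * 1_000_000_000   # 2gbit/sec aggregate
  mbytes_per_sec = aggregate_bits_per_sec / 10 / 1_000_000  # 8b/10b coding
  print(mbytes_per_sec)   # 200.0 mbytes/sec ... vs ESCON's 17mbytes/sec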
20+ years ago, I had a DSL phone line and was trying to check news on 100+ web sites. I finally wrote a minimal web crawler for the 100+ pages that checks for changes and then command-line loads new/unseen news page URLs in the background into browser tabs ... to eliminate latency switching between web pages (I had to get a pretty hefty tricked-out PC) ... still using it.
... although now I can send out requests for different websites concurrently ... however, the heuristics for URLs to the same webserver have gotten trickier (so they don't think I'm a robot) ... command-lines to the browser for each URL (for the same webserver) have to be spaced out.
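A minimal sketch of that kind of crawler (stdlib only; the page list, hash-based change detection, and fixed pacing are assumptions about the general approach, not the original program):

  import hashlib, time, webbrowser
  from urllib.request import urlopen

  PAGES = ["https://example.com/news"]   # stand-in for the 100+ news pages
  seen = {}                              # url -> hash of last-seen content

  for url in PAGES:
      body = urlopen(url, timeout=30).read()
      digest = hashlib.sha256(body).hexdigest()
      if seen.get(url) != digest:        # page changed since last check
          seen[url] = digest
          webbrowser.open_new_tab(url)   # load into a browser tab in background
          time.sleep(2)                  # space out requests to the same
                                         # webserver so it doesn't flag a robot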
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON (ibm protocol running over fibre channel standard)
https://www.garlic.com/~lynn/submisc.html#ficon
some posts mentioning 3272/3277 hardware .086 response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2015d.html#33 Remember 3277?
https://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2012e.html#90 Just for a laugh ... How to spot an old IBMer
some posts mentioning browser tabs
https://www.garlic.com/~lynn/2013o.html#25 GUI vs 3270 Re: MVS Quick Reference, was: LookAT
https://www.garlic.com/~lynn/2012n.html#39 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2010d.html#22 OT: PC clock failure--CMOS battery?
https://www.garlic.com/~lynn/2010b.html#48 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#36 SSL certificates and keys
https://www.garlic.com/~lynn/2007m.html#8 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2005n.html#41 Moz 1.8 performance dramatically improved
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: System Response Date: 22 Mar 2022 Blog: Facebook
re:
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (including world-wide online sales&marketing support HONE systems). After transferring from the science center to san jose research in the 70s, I got to wander around both IBM and non-IBM datacenters in silicon valley including US HONE datacenter (all US HONE datacenters had been consolidated up in Palo Alto), STL (now SVL), bldg14 (disk engineering) and bldg15 (product test).
bldg14 was doing prescheduled, 7x24, stand-alone mainframe testing. They said they had tried MVS, but it had 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity. I then wrote an (internal-only) research report about the never-fail work and happened to mention the MVS 15min MTBF ... which brought down the wrath of the MVS group on my head. When they initially called, I thought they wanted help with all the RAS work, but apparently they wanted to have me separated from the IBM company.
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
and from IBM Jargon:
https://comlay.net/ibmjarg.pdf
bad response - n. A delay in the response time to a trivial request of
a computer that is longer than two tenths of one second. In the 1970s,
IBM 3277 display terminals attached to quite small System/360 machines
could service up to 19 interruptions every second from a user. I
measured it myself. Today, this kind of response time is considered
impossible or unachievable, even though work by Doherty, Thadhani, and
others has shown that human productivity and satisfaction are almost
linearly inversely proportional to computer response time. It is hoped
(but not expected) that the definition of Bad Response will drop below
one tenth of a second by 1990.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Google Cloud Date: 22 Mar 2022 Blog: Facebook
Mainframe & Application Modernization Manager, Cloud Delivery Center
In the 80s, a co-worker at IBM San Jose Research had left and was doing lots of contracting work in silicon valley. He had redone a lot of mainframe C (significantly improving instruction optimization) and had ported the Berkeley chip tools to the mainframe. One day the local IBM rep stopped by and asked him what he was doing ... and he said mainframe->SGI ethernet support, so they could use SGI graphical workstations as front-ends to the mainframe. The IBM rep then told him he should do token-ring instead, or the customer might find that their mainframe support wasn't as timely as in the past. I then get a phone call and have to listen to an hour of four-letter words. The next morning, the senior engineering VP of the (large VLSI chip) company calls a press conference and says they are moving everything off the IBM mainframe to SUN servers. IBM then had a bunch of task forces to decide why silicon valley wasn't using IBM mainframes ... but the task forces weren't allowed to consider some of the real reasons.
In 1988, I get asked by the branch to help LLNL (national lab) standardize some serial stuff that they were playing with, which quickly becomes fibre channel standard (including some stuff I had done in 1980). Initially it is 1gbit/sec full-duplex, 2gbit/sec aggregate (200mbyte/sec). Then the POK engineers finally get their stuff released in 1990 with ES/9000 as ESCON when it is already obsolete (17mbytes/sec).
Later some POK engineers become involved with the fibre channel standard and define a protocol that radically reduces the throughput, eventually released as FICON. The latest benchmark I can find is "PEAK I/O" for a max-configured z196, which gets 2M IOPS using 104 FICON (running over 104 FCS). About the same time there was an FCS announced for e5-2600 blades claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON running over 104 FCS).
Note the (z196 era) e5-2600 blade benchmarked at ten times a max-configured z196 (benchmark is number of program iterations compared to a 370/158-3, assumed to be a 1MIPS machine) ... and these blades have maintained that ten-times processing advantage over mainframe z-machines. A large cloud operation will have a dozen or more megadatacenters around the world, each with half a million or more of these blades (each megadatacenter 5-10 million times the processing of a max-configured mainframe, and enormous automation with staffs of only 80-120 people at each center).
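Back-of-envelope arithmetic for those comparisons (a sketch using only the figures quoted above; the half-million blade count and ten-times multiple are the stated assumptions):

# Per-link throughput, from the quoted "PEAK I/O" and FCS figures:
z196_iops, ficon_links = 2_000_000, 104
fcs_iops = 1_000_000                        # one e5-2600 era FCS
per_ficon = z196_iops / ficon_links
print(f"{per_ficon:,.0f} IOPS per FICON")   # ~19,231
print(f"one native FCS ~ {fcs_iops / per_ficon:.0f}x one FICON")  # ~52x

# Megadatacenter aggregate, under the stated assumptions:
blades_per_center = 500_000   # "half million or more" blades
per_blade = 10                # each blade ~10x a max-configured z196
print(f"{blades_per_center * per_blade:,}x one max-configured mainframe")
# => 5,000,000x, the low end of the 5-10 million figure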
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Google Cloud Date: 22 Mar 2022 Blog: Facebook
re:
In the 60s, there were two spinoffs of the IBM cambridge scientific
center, creating commercial online (virtual machine) cp67/cms service
bureaus. There were other (virtual machine) service bureaus formed
later ... like TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
In aug1976, TYMSHARE offers their (VM370/)CMS-based computer
conferencing system free to IBM user group, SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here:
http://vm.marist.edu/~vmshare
also part of TYMSHARE
https://encyclopedia2.thefreedictionary.com/Tymshare
https://en.wikipedia.org/wiki/Tymnet
I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for deploying on internal systems and networks.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial virtual machine online services
https://www.garlic.com/~lynn/submain.html#timeshare
Other drift: within a year of taking the two-credit intro to computers/fortran, I was hired fulltime by the univ. to be responsible for production operating systems. Then, before I graduate, I'm hired into a small group in the Boeing CFO office to help with the creation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities.
I thought (just the) Renton datacenter was possibly the largest in the world, something like a couple hundred million in 360 systems (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room). Lots of politics between the Renton manager and the CFO, who only had a 360/30 up at Boeing field for payroll (although the machine room was enlarged to install a 360/67 that I could play with when I wasn't doing other stuff).
Also in the early 80s, I started the HSDT project (T1 and faster computer links), was working with the director of NSF, and was supposed to get $20M to interconnect the NSF supercomputer centers. Was also doing some FEC work with Cyclotomics up at Berkeley (founded by Berlekamp, later bought by Kodak) on rate-15/16 Reed-Solomon forward error correction. Trivia: Cyclotomics also provided the encoding standard for CDROMs.
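A quick sketch of what that code rate means (assuming the "15/16" in the text is the Reed-Solomon code rate, i.e. 15 data symbols for every 16 transmitted):

# Rate-15/16 Reed-Solomon overhead arithmetic (assumption: "15/16"
# is the code rate, 15 data symbols per 16 transmitted).
k, n = 15, 16
print(f"redundancy added: {(n - k) / k:.1%}")          # ~6.7% extra symbols
print(f"payload fraction of line rate: {k / n:.2%}")   # 93.75%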
Then congress cuts the budget, some other things happen and eventually
an RFP is released (in part based on what we already had
running). Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
"The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet"
... snip ...
... internal IBM politics prevent us from bidding on the RFP. The NSF
director tries to help by writing the company a letter (3Apr1986, NSF
Director to IBM Chief Scientist and IBM Senior VP and director of
Research, copying the IBM CEO) with support from other gov. agencies,
but that just makes the internal politics worse (as did claims that
what we already had running was at least 5yrs ahead of the winning
bid; RFP awarded 24Nov87). The winning bid doesn't even install the
T1 links called for ... they are 440kbit/sec links ... but apparently
to make it look like it's meeting the requirements, they install
telco multiplexors with T1 trunks (running multiple links/trunk). We
periodically ridicule them, asking why they don't call it a T5
network (because some of those T1 trunks would in turn be multiplexed
over T3 or even T5 trunks). As regional networks connect in, it
becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/
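The trunk arithmetic behind that ridicule, as a rough sketch (the 440kbit/sec link figure is from the text; the T1/T3 rates are the standard telco figures):

# Why multiplexed T1 trunks don't make it a T1 network:
T1 = 1_544_000    # bits/sec, standard T1 trunk rate
T3 = 44_736_000   # bits/sec, standard T3 trunk rate
link = 440_000    # bits/sec, the links the winning bid actually installed

print(T1 // link)  # => 3 such links fit per T1 trunk
print(T3 // T1)    # => 28 T1 trunks per T3; by the same logic, "a T3 network"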
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970