From: Lynn Wheeler lynn@garlic.com
Subject: Will The Cloud Take Down The Mainframe?
Date: 26 Jan 2021
Blog: Facebook
1980, STL was stuffed to the gills and they were planning on moving 300 people from the IMS group to an offsite bldg with dataprocessing support back to the STL datacenter. They had tried "remote 3270" (through 3704/3705) and found the human factors totally unacceptable. I get con'ed into doing channel-extender support, channel-attaching 3270 controllers at the offsite bldg so there was no perceptible difference between the remote and local human factors. The hardware vendor wants IBM to let them release my support ... but there is a group in POK that is playing with some serial stuff and gets it vetoed; they were afraid that if it was in the market, it would make it harder for them to get their stuff released.
In 1988, I'm asked to help LLNL (national lab) get some serial stuff they are playing with standardized ... which quickly becomes the fibre channel standard (including some stuff I had done in 1980). The POK people finally get their stuff released in 1990 with ES/9000 as ESCON, when it is already obsolete. Later some POK engineers become involved in the fibre channel standard and define a heavyweight protocol that drastically reduces the throughput of native FCS ... which eventually is released as FICON.
The latest published FICON benchmark numbers I can find are the "peak I/O" z196 numbers, which used 104 FICON (running over 104 FCS) to manage a peak of 2M IOPS. Note that about the same time as the z196 "peak I/O" benchmark, an FCS was announced for e5-2600 blades that claimed over a million IOPS (two such FCS have higher throughput than 104 FICON running over 104 FCS).
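A minimal sketch (Python, using only the figures quoted above) of the per-link arithmetic behind that comparison:

# per-link arithmetic from the z196 "peak I/O" and e5-2600 FCS figures above
z196_peak_iops = 2_000_000      # z196 "peak I/O" benchmark
z196_ficon_links = 104          # FICON channels (each running over an FCS)
fcs_claimed_iops = 1_000_000    # "over a million IOPS" claimed for one native FCS

iops_per_ficon = z196_peak_iops / z196_ficon_links
print(iops_per_ficon)                       # ~19,231 IOPS per FICON
print(fcs_claimed_iops / iops_per_ficon)    # one native FCS ~= 52 FICONs' worth of IOPS
# so two native FCS, each claiming "over a million" IOPS, match or exceed all 104 FICON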
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
Other trivia: 1985, I'm asked if I would turn out, as a type1
product, some stuff one of the baby bells had done emulating VTAM&NCP
on Series/1 (quickly upgrading to RIOS/RS6000). It was well known that
the communication group was notorious for corporate dirty tricks
... so much of the time was strategy to prevent raleigh from blocking
the effort. It emulated cross-domain with ownership of all resources
out in the Series/1 infrastructure with no single point of failure
... allowing much larger configurations than a single VTAM domain. I took
a baby bell configuration of greater than 64K 3270s and used the HONE
3725 configurator to size the equivalent 3725/NCP configuration,
producing a presentation that I gave to the SNA ARB in
raleigh. The communication group kept complaining that the comparison
was invalid ... but were never able to explain how it was invalid.
What the communication group then did to torpedo the effort can only
be described as truth being stranger than fiction. Part of the 1986
presentation that the baby bell gave at the COMMON user group
https://www.garlic.com/~lynn/99.html#70
part of presentation I did for fall 1986 SNA ARB in raleigh
https://www.garlic.com/~lynn/99.html#69
One of the things I realized working with the Boca Series/1 people was that SNA was a triple oxymoron: not a System, not a Network, and not an Architecture ... they complained that it wasn't possible to just build stuff from Raleigh documents that interfaced to SNA ... everything required careful tracing and reverse engineering to get it to work.
Other communication group trivia: Late 80s, a senior disk engineer gets a talk scheduled at the annual, world-wide, internal IBM communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters with strategic ownership of everything that crossed datacenter walls and were fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. The disk division was seeing customers moving data to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with several solutions, but they were constantly being vetoed by the communication group. The communication group's datacenter stranglehold wasn't just killing disk sales but affecting the whole mainframe business, and a few short years later IBM had gone into the red and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Killer micro trivia: late 90s, i86 was moving to risc chips with a
hardware layer that translated i86 instructions into risc micro-ops
... largely negating the difference between i86 and risc processor
throughput ... with their much faster development cycles they were
leaving mainframes in the dust.
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
by the time of the z196 era, the e5-2600 server blade (mentioned
above) had a 500BIPS rating, ten times a max configured z196 (industry
standard benchmark, number of iterations compared to the 370/158-3
assumed to be a one MIPS processor). At the time, a max configured
z196 had a price tag of $30M ($600,000/BIPS) while the IBM list price
for an e5-2600 blade was $1815 ($3/BIPS). However, for a decade the
large cloud operations had been claiming that they assembled their own
server blades at 1/3rd the cost of brand name blades ... aka $1/BIPS.
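A short sketch (Python) reproducing the z196-era arithmetic from the figures quoted above (the $3/BIPS and $1/BIPS figures in the post are loose roundings of the results below):

# reproduce the z196-era comparison using only figures quoted above
z196_procs, z196_bips = 80, 50              # max configured z196
print(z196_bips * 1000 / z196_procs)        # 625 MIPS/proc

e5_bips = 500                               # e5-2600 blade rating
print(e5_bips / z196_bips)                  # 10x a max configured z196

z196_price, e5_price = 30_000_000, 1815     # quoted prices
print(z196_price / z196_bips)               # $600,000/BIPS for the z196
print(e5_price / e5_bips)                   # ~$3.63/BIPS for the e5-2600 blade
print(e5_price / 3 / e5_bips)               # ~$1.21/BIPS self-assembled at 1/3 cost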
About that time, the press had server chip makers shipping half their product directly to cloud operations ... and shortly after that IBM sold off its server business. Note that the large cloud operators each have dozens of megadatacenters around the world, each megadatacenter operated with a staff of 80-120 people and containing over half a million server blades ... each server blade now 10-50 times the processor performance of a max configured mainframe.
More communication group trivia: In the mid-80s, the communication group was fiercely fighting the announce of mainframe tcp/ip support ... but lost that battle ... and then switched to insisting that, since they had strategic ownership of everything that crossed the datacenter walls, the mainframe tcp/ip product had to be done through them. What eventually shipped got 44kbytes/sec aggregate throughput using a 3090 processor. I then did the RFC 1044 enhancements and, in some tuning tests at Cray Research between a 4341 and a CRAY, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (around 500 times improvement in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Random observation: my wife did a short stint as chief architect for Amadeus (the european airline res system built off the old Eastern System One) ... however she sided with Europe on X.25 and the communication group quickly got her replaced. It didn't do them much good since Europe went with X.25 anyway ... and the communication group's replacement was quickly replaced.
Amadeus posts:
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs
https://www.garlic.com/~lynn/2007h.html#12 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008p.html#41 Automation is still not accepted to streamline the business processes... why organizations are not accepting newer technologies?
https://www.garlic.com/~lynn/2009j.html#33 IBM touts encryption innovation
https://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2009r.html#59 "Portable" data centers
https://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2011.html#41 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
https://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System
https://www.garlic.com/~lynn/2011d.html#74 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011i.html#77 program coding pads
https://www.garlic.com/~lynn/2012c.html#8 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012c.html#9 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012h.html#52 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012j.html#5 Interesting News Article
https://www.garlic.com/~lynn/2012o.html#13 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2014c.html#69 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2015g.html#72 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017k.html#60 SABRE after the 7090
https://www.garlic.com/~lynn/2017k.html#67 SABRE after the 7090
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Will The Cloud Take Down The Mainframe?
Date: 26 Jan 2021
Blog: Facebook
re:
The last product we did at IBM was HA/CMP. It originally started out
as HA/6000, for the NYTimes to move their newspaper system (ATEX) off
VAX/Cluster to IBM. Then when I was working with national labs on
technical/scientific cluster scale-up and RDBMS vendors on commercial
cluster scale-up, I renamed it HA/CMP (High Availability Cluster
Multi-Processing). The RDBMS vendors had VAX/Cluster and unix support
in the same source base ... and to ease the port, I did an API
implementation that emulated VAX/Cluster but with scale-up&performance
enhancements. This old post is about an early jan1992 meeting in
Ellison's conference room (Oracle CEO) on cluster scale-up (16-way by
mid92 and 128-way by ye92)
https://www.garlic.com/~lynn/95.html#13
Then within a few weeks, cluster scale-up is transferred, announced as an IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work with anything that had more than four processors. A few months later, we depart IBM. A contributing factor may have been the mainframe DB2 group complaining that if we were allowed to go ahead, it would be years ahead of them. I had also been asked to contribute to the IBM corporate continuous availability strategy document. However, my section got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't meet those requirements.
17Feb1992 press, ibm supercomputer for scientific/technical *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 press, national lab interest in cluster supercomputing
caught IBM by "surprise" (even tho I had been working with them on it
for over a decade)
https://www.garlic.com/~lynn/2001n.html#6000clusters2
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Trivia: Jan1979, I was con'ed into doing benchmarks for a national lab on an engineering 4341; the lab was interested in getting 70 of them for a (cluster) compute farm.
Other trivia: For a time my HSDT project (T1/1.5mbit/sec and faster
computer links) in the mid-80s had a T1 link into Clementi's E&S lab
in Kingston; he had a 3090, but the real compute power was in Floating
Point Systems boxes (which had 40mbyte/sec disk arrays to keep the
processor fed) ... and he had twenty such boxes. HSDT had been working
with the director of NSF and was supposed to get $20M to interconnect
the NSF supercomputer centers. Then congress cuts the budget, some
other things happen and eventually an RFP is released (in part based
on what we already had running). Old post with the 28Mar1986
preliminary release.
https://www.garlic.com/~lynn/2002k.html#12
Internal politics prevent us from bidding and the NSF director tries
to help by writing the company a letter (3Apr1986, NSF Director to IBM
Chief Scientist and IBM Senior VP and director of Research, copying
IBM CEO) with support from other agencies, but that just makes the
internal politics worse (as does the comment that what we already had
running was at least 5yrs ahead of all RFP responses). As regional
networks connect into the centers, it grows into the NSFNET backbone
(precursor to the modern internet)
https://www.technologyreview.com/s/401444/grid-computing/
NSF interconnect posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Will The Cloud Take Down The Mainframe?
Date: 26 Jan 2021
Blog: Facebook
re:
Trivia: after transferring to San Jose Research, I got to wander around most IBM and customer datacenters in silicon valley. One of the places was disk engineering and product test across the street (bldgs 14&15). At the time they were doing 7x24, prescheduled, stand-alone mainframe testing. They mentioned that they had recently tried MVS ... but it had a 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bulletproof and never fail ... allowing any amount of concurrent, on-demand testing, greatly improving productivity.
Bldg15 gets the first 3033 engineering box outside POK (#3?) for disk channel testing. Since I/O testing only takes a percent or two of the processor ... we find a spare string of 3330s and set up our own private online service on the 3033.
One of the early issues was that the external 303x channel boxes (a 158 engine with just the integrated channel microcode and w/o the 370 microcode) still had some number of glitches and would require manually recycling the box. We found that if you quickly hit all six of the box's channel addresses with the CLRCH instruction, the box would reset itself and re-IMPL ... w/o requiring manual intervention (i.e. for 16 channels, a 3033 needed three of these boxes).
Later I did an internal writeup of everything done to handle the engineering testing and happened to mention the MVS 15min MTBF ... which apparently greatly embarrassed the MVS executives ... I was told informally that they tried to have me separated from the IBM company ... and when that didn't work, they tried other harassing/bullying activities.
posts getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Will The Cloud Take Down The Mainframe?
Date: 26 Jan 2021
Blog: Facebook
re:
Other trivia: back in the sixties ... when IBM rented computers and charges were based on the system meter ... which ran whenever any processor and/or channel was busy ... and even internal IBM would charge departments to recover datacenter costs (even tho it was purely funny money) ... the science center wanted to move to leaving CP67 up and available 7x24 ... but wanted to minimize costs ... especially in offshift periods when the system might just be sitting idle waiting for users to dial in (aka cloud on-demand). There was lots of operational automation to allow dark room operation offshift w/o an operator. Next was custom channel programs that would allow the channel to go to sleep (and let the system meter stop) ... but immediately wake up when any bits were arriving. This was also used by the 60s CP67 commercial online service bureau spinoffs of the science center. Note that everything (all processors and channels) had to be idle for at least 400ms for the system meter to stop. MVS trivia: long after IBM had switched to selling machines in the 70s ... MVS still had a timer task that would wake up every 400ms (guaranteeing that the system meter never came to a stop).
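A toy simulation (Python) of the point: the meter only stops after 400ms of continuous idle, so a timer task waking every 400ms keeps it running forever (the event timings below are invented purely for illustration):

# toy model of the rented-machine system meter: it stops only after the
# processors and channels have all been idle for a full 400ms
IDLE_THRESHOLD_MS = 400

def meter_running_time(busy_events, horizon_ms):
    # busy_events: sorted times (ms) at which something briefly goes busy
    running, last_busy = 0, 0
    for t in busy_events + [horizon_ms]:
        # meter keeps running through any idle gap shorter than the threshold
        running += min(t - last_busy, IDLE_THRESHOLD_MS)
        last_busy = t
    return running

horizon = 10_000  # ten seconds
print(meter_running_time([5_000], horizon))                       # 800: meter mostly stopped
print(meter_running_time(list(range(0, horizon, 400)), horizon))  # 10000: meter never stops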
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online service bureau posts
https://www.garlic.com/~lynn/submain.html#timeshare
Most of IBM viewed computers as profit items ... while the major cloud operators view them as cost items. They have so aggressively reduced server system costs that people, power and cooling have increasingly become the major cost items ... and so they have also been very aggressively working on automation as well as minimizing computer power&cooling requirements ... components suspending operation when idle but able to become immediately available ondemand. Major cloud operations have dozens of megadatacenters around the world, each megadatacenter staffed with 80-120 people, having over half a million blade servers, and each blade server 10-50 times the performance of a max configured IBM mainframe.
Several cloud issues: they needed to reduce computer system costs by several orders of magnitude ... and they needed full (free, non-proprietary) source to adapt the software for automation and massive cluster operation. After the turn of the century, they were claiming they were assembling their own servers for 1/3rd the cost of brand name servers. Then, after server chip maker press saying that they were shipping at least half their products directly to the major cloud operators (about that time, IBM sells off its server business), we started seeing press that major cloud vendors were applying enormous pressure to the chip makers to improve computing power&cooling efficiencies ... as well as design for ondemand operation, dropping power/cooling to zero when idle but instantly available ondemand.
Last decade, financial numbers had IBM processor hardware sales dropping to the equivalent of 50-60 max configured systems per year. However, the mainframe group was 25% of revenue and 40% of profit ... nearly all software & services (newer hardware models needed to keep the software and services revenue flowing).
megadatacenter postings
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Killer Micros
Date: 27 Jan 2021
Blog: Facebook
re:
Early 90s, there were articles that killer micros would be the death of the mainframe; much of the low hanging fruit moved off the mainframe ... and IBM goes into the red ... was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... what was left was a few high-ticket items like financial. Mid/late 90s, the financial industry/wallstreet was spending billions of dollars to rewrite software to move off the mainframe. Lots of 60s mainframe financial software recorded transactions during the day for settlement in the overnight batch window. Even real-time, online transaction software from the 70s&80s was still queuing transactions for settlement in the overnight batch window. During the 90s, the overnight batch window was being severely stressed by increasing workloads and by globalization cutting the duration of the window.
The billions of dollars were to redo financial transactions as parallel straight-through processing (each transaction settled as it occurred) on loads of killer micros. However, they were using industry standard parallel libraries that had a hundred times the overhead of mainframe cobol batch. Several of us raised warnings ... but they were ignored until the pilot projects started going up in spectacular flames ... the parallelization overhead totally swamping the increase in throughput planned from having loads of killer micros.
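A back-of-envelope sketch (Python) of why the pilots failed, using only the "hundred times the overhead" figure from above; the micro-vs-mainframe per-processor ratio below is a hypothetical placeholder for illustration:

# if the standard parallel libraries impose ~100x the per-transaction
# pathlength of cobol batch, how many killer micros are needed just to
# break even with one mainframe processor?
overhead_factor = 100        # parallel pathlength / cobol batch pathlength (from post)
micro_vs_mainframe = 2.0     # hypothetical: one micro = 2x one mainframe processor

def micros_to_break_even(mainframe_procs):
    return mainframe_procs * overhead_factor / micro_vs_mainframe

print(micros_to_break_even(1))   # 50.0 micros just to match one processor's batch throughput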
A couple of things: 1) in the late nineties micros moved to risc processors with a hardware layer that translated i86 instructions into risc micro-ops ... and with their much faster development cycles started leaving mainframes in the dust (by z196, a max config with 80 processors was rated at 50BIPS while a commodity e5-2600 server blade had a rating of 500BIPS ... using the industry standard benchmark, number of iterations compared to the no. of iterations on a 370/158-3 assumed to be one MIPS ... aka ten times a max configured z196); 2) in the middle of the decade after the turn of the century, I started working with somebody that had developed a financial business language that generated fine grain, easily parallelized SQL statements ... and did several high-end examples of major financial transaction systems. This took advantage of a) the significant cluster parallelization throughput improvements done by most RDBMS vendors (including IBM) and b) the significant performance increase in i86 processors. We were easily able to demonstrate that a six-system RDBMS cluster (with each system ten times the performance of a max configured mainframe) could handle the largest financial transaction load with lots of spare capacity. This got major acceptance at financial industry meetings ... and then we hit a brick wall. They eventually said that there were executives that bore the scars from the straight-through disasters in the 90s and it was going to be a long time before it was tried again.
Trivia: turn of the century, I was brought into a financial industry outsourcing datacenter that handled half of all credit card accounts in the country (transactions, statementing, call centers, plastic cards, etc). They had more than 40 max configured mainframe systems (@$30M, around $1.5B total, none older than 18m, constant rolling upgrades), each running a 450K cobol statement application (accounts partitioned across the systems, couldn't afford the overhead of parallel sysplex), the number needed to complete settlement in the overnight batch window. They had an 80-person performance group that had been handling the performance care & feeding for decades ... but I was asked to see what else I could find. I eventually found a 14% improvement (>$200M in max configured mainframes). Part of the issue was that they had gotten micro-focused with the same tools they had been using for years. At the IBM cambridge science center in the 70s, we did a lot of performance technology work (including much of the early capacity planning stuff) using several different technologies. Since they had been so focused at the microlevel ... I used some tools to look at the macrolevel to see if there were things that they had been overlooking. Other trivia: in the plastic card room with a sea of embossing machines, they had a banner saying they had done 500M plastic card embossings & mailings that year.
Other trivia: I had worked with Jim Gray on the original sql/relational system, "System R" ... and then when he left for Tandem in the early 80s, he tried to palm off a bunch of stuff on me ... working with early "System R" customers (Bank of America, 60 distributed systems in branch offices), tech transfer to Endicott for SQL/DS ("under the radar" while mainstream IBM was preoccupied with the "official" next generation DBMS, "EAGLE" ... at least until EAGLE implodes and there is a request for how fast a port could be done to MVS ... which is eventually released as DB2, originally for decision support only), consulting with the IMS DBMS group in STL, etc.
Also ... the massive credit card processing datacenter with >40 max configured mainframes (@$30M) that were needed to finish cobol batch settlement during the overnight batch window had something of a cloud ondemand issue, but on a totally different timescale ... current cloud ondemand can be measured in subsecond or few-second periods (with at least an order of magnitude difference between nominal and peak load). There is nearly an order of magnitude difference in the average number of credit card transactions per day between the summer months and the peak holiday buying season. The >40 max configured mainframes were needed for the peak holiday season ... but the need drastically dropped off during the summer months. The earnings on credit card transactions are sufficient to keep peak-season mainframe processing capacity year round. By comparison, the cloud operations had to reduce their system processing costs by nearly six orders of magnitude ($$$/BIPS, compared to IBM mainframe) ... to the point that people, power, cooling, etc, have become the increasingly dominant costs.
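A quick sketch (Python) of that seasonal-capacity arithmetic, using only the figures from the post (the ~10x seasonal ratio is the "order of magnitude" mentioned above):

# capacity sized for the peak holiday season vs the summer trough
peak_systems = 40        # max configured mainframes needed at peak (from post)
seasonal_ratio = 10      # ~order of magnitude between peak and summer (from post)

summer_needed = peak_systems / seasonal_ratio
print(summer_needed)                   # ~4 systems' worth of load in the summer
print(peak_systems - summer_needed)    # ~36 systems' worth of capacity idle at the trough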
System R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts mentioning RDBMS commercial cluster work
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
posts mentioning 450K cobol statement application:
https://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
https://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018f.html#13 IBM today
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
overnight batch window &/or straight-through processing
https://www.garlic.com/~lynn/aadsm19.htm#46 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#20 ID "theft" -- so what?
https://www.garlic.com/~lynn/aadsm28.htm#14 Break the rules of governance and lose 4.9 billion
https://www.garlic.com/~lynn/aadsm28.htm#35 H2.1 Protocols Divide Naturally Into Two Parts
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2015h.html#2 More "ageing mainframe" (bad) press
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2016.html#25 1976 vs. 2016?
https://www.garlic.com/~lynn/2016b.html#48 Windows 10 forceful update?
https://www.garlic.com/~lynn/2016d.html#84 The mainframe is dead. Long live the mainframe!
https://www.garlic.com/~lynn/2016g.html#23 How to Fix IBM
https://www.garlic.com/~lynn/2016h.html#72 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#58 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#63 The ICL 2900
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017g.html#17 Wall Street
https://www.garlic.com/~lynn/2017h.html#32 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017j.html#3 Somewhat Interesting Mainframe Article
https://www.garlic.com/~lynn/2017j.html#37 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018f.html#85 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Availability
Date: 27 Jan 2021
Blog: Facebook
re:
Before leaving IBM, out marketing our HA/CMP product, I coined the terms disaster survivability and geographic survivability (to differentiate from disaster recovery) and was asked to write a section for the IBM continuous availability strategy document (although the section got pulled when both rochester/as400 and POK/mainframe complained they couldn't meet the requirements ... aka five-nines across flooding, power outages, earthquakes, etc).
I think cloud service contracts can call for no backup, backup within a single datacenter, backup across datacenters, load-balancing across datacenters and/or processing routed to the closest datacenter, as well as how much guaranteed elastic ondemand capacity is provided.
A few years ago there were also articles about people using a credit card to (automagically) spin up ondemand (cluster) supercomputers (that would rank in the top 40 in the world) for a few hours from cloud datacenters. They could also get a reduced rate if they could preschedule during selected periods.
I had worked with Jim Gray on the original sql/relational System R;
he left IBM in the early 80s for Tandem (trying to palm off a bunch of
stuff on me). We would then periodically visit him at Tandem. This is
from a 1984 availability study he did
https://www.garlic.com/~lynn/grayft84.pdf
also
https://jimgray.azurewebsites.net/papers/TandemTR86.2_FaultToleranceInTandemComputerSystems.pdf
By 1990 we were looking at geographic survivability (floods, power outages, telco outages, earthquakes, etc) and studying how things failed. NYSE had tandem computers in a carefully selected datacenter in Manhattan with carefully selected "diverse routing" ... power feeds from multiple substations into the bldg from different sides & routes, water from multiple different water mains, telco from multiple central offices into the bldg from different sides & routes. They lost service when a power transformer in the basement blew, spewing PCBs all through the bldg, requiring evacuation and bldg shutdown.
This goes back to my undergraduate days when I had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services. I thought the Renton datacenter was possibly the largest in the world (a couple hundred million in 360s, 60s $$$). There was a disaster scenario where Mt Rainier heats up and the mud flow takes out the Renton datacenter ... so they were going to replicate the Renton datacenter up at the new 747 plant in Everett (geographic survivability) ... the business case was that the cost to Boeing of being w/o the Renton datacenter for a week was more than the cost of the Renton datacenter (geographically replicated datacenters).
HA/CMP
https://www.garlic.com/~lynn/subtopic.html#hacmp
original sql/relational RDBMS, System R
https://www.garlic.com/~lynn/submain.html#systemr
continuous availability posts
https://www.garlic.com/~lynn/submain.html#available
Renton datacenter replication
https://www.garlic.com/~lynn/2001.html#36 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2008s.html#74 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2010c.html#89 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010h.html#33 45 years of Mainframe
https://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010l.html#51 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010q.html#59 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2011h.html#61 Do you remember back to June 23, 1969 when IBM unbundled
https://www.garlic.com/~lynn/2011l.html#37 movie "Airport" on cable
https://www.garlic.com/~lynn/2012.html#42 Drones now account for one third of U.S. warplanes
https://www.garlic.com/~lynn/2012n.html#60 The IBM mainframe has been the backbone of most of the world's largest IT organizations for more than 48 years
https://www.garlic.com/~lynn/2013.html#7 From build to buy: American Airlines changes modernization course midflight
https://www.garlic.com/~lynn/2013d.html#50 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013o.html#18 Why IBM chose MS-DOS, was Re: 'Free Unix!' made30yearsagotoday
https://www.garlic.com/~lynn/2014c.html#31 How many EBCDIC machines are still around?
https://www.garlic.com/~lynn/2014d.html#32 [OT ] Mainframe memories
https://www.garlic.com/~lynn/2014e.html#9 Boyd for Business & Innovation Conference
https://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014l.html#88 IBM sees boosting profit margins as more important than sales growth
https://www.garlic.com/~lynn/2015.html#33 Why on Earth Is IBM Still Making Mainframes?
https://www.garlic.com/~lynn/2015.html#75 Ancient computers in use today
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
https://www.garlic.com/~lynn/2015h.html#100 OT: Electrician cuts wrong wire and downs 25,000 square foot data centre
https://www.garlic.com/~lynn/2016c.html#10 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016c.html#17 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2017.html#46 Hidden Figures and the IBM 7090 computer
https://www.garlic.com/~lynn/2017c.html#14 Check out Massive Amazon cloud service outage disrupts sites
https://www.garlic.com/~lynn/2017d.html#14 Perry Mason TV show--bugs with micro-electronics
https://www.garlic.com/~lynn/2017d.html#90 Old hardware
https://www.garlic.com/~lynn/2017g.html#60 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017j.html#83 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017j.html#104 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2018.html#28 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2018e.html#29 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2019.html#54 IBM bureaucracy
https://www.garlic.com/~lynn/2019b.html#38 Reminder over in linkedin, IBM Mainframe announce 7April1964
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: Airline Reservation System
Date: 27 Jan 2021
Blog: Facebook
re:
Trivia: in 1994 (after leaving IBM) I was asked into AA SABRE to look at ten things they couldn't do ... they started with ROUTES (25% of CPU workload) and gave me a complete copy of OAG (all scheduled flts and airports in the world). I came back two months later with ROUTES rewritten in C running on an RS/6000 ... doing all their impossible things ... including being able to scale so that it could handle every ROUTE request for every airline in the world. Then the hand-wringing started ... the existing implementation had technology design trade-offs from the 60s ... which required a lot of manual conversion of OAG so it could be handled by the 60s implementation (upwards of 800 people). Starting from scratch with 90s trade-offs, I could effectively use OAG directly (w/o those 800 people). They eventually said they actually didn't want me to redo it, they just wanted to be able to tell the parent company board I was working on it (apparently one of the board members had been at IBM in the past and knew me).
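Purely as an illustration of the "use OAG directly" point (not the actual ROUTES code, and the segment data is hypothetical): a minimal Python sketch of a connection search straight over an OAG-style schedule table:

# sketch of searching itineraries directly from an OAG-style segment table
from collections import defaultdict, deque

segments = [                      # hypothetical (origin, destination, flight) rows
    ("SFO", "ORD", "AA1234"),
    ("ORD", "JFK", "AA98"),
    ("SFO", "DFW", "AA356"),
    ("DFW", "JFK", "AA1444"),
]

by_origin = defaultdict(list)
for orig, dest, flt in segments:
    by_origin[orig].append((dest, flt))

def routes(origin, destination, max_legs=3):
    # breadth-first search for itineraries up to max_legs segments
    found, queue = [], deque([(origin, [])])
    while queue:
        airport, legs = queue.popleft()
        if airport == destination and legs:
            found.append(legs)
            continue
        if len(legs) >= max_legs:
            continue
        for dest, flt in by_origin[airport]:
            queue.append((dest, legs + [flt]))
    return found

print(routes("SFO", "JFK"))       # [['AA1234', 'AA98'], ['AA356', 'AA1444']]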
some airline reservation posts
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2016.html#75 American Gripen: The Solution To The F-35 Nightmare
https://www.garlic.com/~lynn/2016c.html#34 Qbasic
https://www.garlic.com/~lynn/2016e.html#93 Delta Outage
https://www.garlic.com/~lynn/2016e.html#98 E.R. Burroughs
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
https://www.garlic.com/~lynn/2016g.html#38 LIFE magazine 1945 "Thinking machines" predictions
https://www.garlic.com/~lynn/2016g.html#44 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2017b.html#9 The ICL 2900
https://www.garlic.com/~lynn/2017d.html#0 IBM & SABRE
https://www.garlic.com/~lynn/2017h.html#80 The IBM Appeal - when is a pensions promise not a promise?
https://www.garlic.com/~lynn/2017h.html#97 Business as Usual: The Long History of Corporate Personhood
https://www.garlic.com/~lynn/2017h.html#98 endless medical arguments, Disregard post (another screwup)
https://www.garlic.com/~lynn/2017h.html#100 'X' Marks the Spot Where Inequality Took Root: Dig Here
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role
https://www.garlic.com/~lynn/2017k.html#60 SABRE after the 7090
https://www.garlic.com/~lynn/2017k.html#67 SABRE after the 7090
https://www.garlic.com/~lynn/2018d.html#22 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2018f.html#86 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2018f.html#118 The Post-IBM World
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2019d.html#118 Armed with J-20 stealth fighters, China's future flattops could 'eventually fight US carriers'
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#75 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: IBM & Apple
Date: 27 Jan 2021
Blog: Facebook
First part of the 80s, my brother was regional Apple marketing rep (largest physical region in CONUS) ... and when he came into town I could be invited to business dinners ... I got to argue MAC design with the MAC developers, even before the MAC was announced. He had also figured out how to dial into the IBM S/38 that ran the business ... to track manufacturing and delivery schedules.
The story is that IBM started its downhill slide with the failure of
the Future System project (although there was so much momentum that it
took quite awhile before the dinosaur collapsed) ... Ferguson &
Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
.... reference
to the "Future System" project 1st half of the 70s:
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with SYCOPHANCY and MAKE
NO WAVES under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat
...
But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrongheadedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive.
... snip ...
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
Late 80s, a senior disk engineer gets a talk scheduled at the annual,
world-wide, internal IBM communication group conference, supposedly
on 3174 performance, but opens the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The issue was that the communication group had
a stranglehold on datacenters with strategic ownership of everything
that crossed datacenter walls and were fiercely fighting off
client/server and distributed computing trying to preserve their dumb
terminal paradigm. The disk division was seeing customers moving data
to more distributed computing friendly platforms with a drop in disk
sales. The disk division had come up with several solutions, but they
were constantly being vetoed by the communication group. The
communication group datacenter stranglehold wasn't just killing disk
sales but affecting the whole mainframe business and a few short years
later, IBM has gone into the red and was being reorganized into the 13
"baby blues" in preparation for breaking up the company ... gone
behind paywall, but mostly lives free at wayback machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
communication group & dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
We had left IBM, but get a call from the bowels of Armonk asking if we could help with the corporate breakup. Lots of business units were using MOUs in lieu of supplier contracts with other business units ... which would be in different companies after the breakup ... all those MOUs would have to be cataloged and turned into their own supplier contracts. Before we get started, a new CEO is brought in who reverses the breakup.
Along the way, former co-workers were complaining that IBM executives weren't running the business, they were totally focused on moving the following year's expenses into the current year ... so we ask our contact in Armonk. He says company executives (470?some) won't get bonuses for the current year in the red (regardless of how deep into the red), but if they can nudge the following year just a little into the black, the way the bonus plan was written they would get bonuses more than twice as large as any previous bonus (i.e. effectively rewarded for taking the company into the red).
Since then the company has had a strong history of executive financial
engineering ... like stock buybacks, Stockman in "The Great
Deformation: The Corruption of Capitalism in America"
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
IBM downfall/downturn posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
IBM breakup and MOU posts
https://www.garlic.com/~lynn/2014d.html#8 Microsoft culture must change, chairman says
https://www.garlic.com/~lynn/2014d.html#55 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014d.html#70 Last Gasp For Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014f.html#54 IBM Sales Fall Again, Pressuring Rometty's Profit Goal
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014m.html#90 Is IBM Suddenly Vulnerable To A Takeover?
https://www.garlic.com/~lynn/2014m.html#143 LEO
https://www.garlic.com/~lynn/2015.html#81 Ginni gets bonus, plus raise, and extra incentives
https://www.garlic.com/~lynn/2015d.html#42 Remember 3277?
https://www.garlic.com/~lynn/2016d.html#89 China builds world's most powerful computer
https://www.garlic.com/~lynn/2016e.html#15 Leaked IBM email says cutting 'redundant' jobs is a 'permanent and ongoing' part of its business model
https://www.garlic.com/~lynn/2016e.html#97 IBM History
https://www.garlic.com/~lynn/2016e.html#108 Some (IBM-related) History
https://www.garlic.com/~lynn/2016f.html#29 Samsung's million-IOPS, 6.4TB, 64Gb/s SSD is ... well, quite something
https://www.garlic.com/~lynn/2016g.html#20 How to Fix IBM
https://www.garlic.com/~lynn/2017.html#62 Big Shrink to "Hire" 25,000 in the US, as Layoffs Pile Up
https://www.garlic.com/~lynn/2017b.html#40 Job Loyalty
https://www.garlic.com/~lynn/2017d.html#5 IBM's core business
https://www.garlic.com/~lynn/2017d.html#19 Mainframes are used increasingly by major banks and financial institutions
https://www.garlic.com/~lynn/2017f.html#109 IBM downfall
https://www.garlic.com/~lynn/2017g.html#105 Why IBM Should -- and Shouldn't -- Break Itself Up
https://www.garlic.com/~lynn/2017h.html#67 IBM: A History Of Progress, 1890s to 2001
https://www.garlic.com/~lynn/2017j.html#80 Here's why Warren Buffett is unloading IBM stock
https://www.garlic.com/~lynn/2017k.html#8 IBM Mainframe
https://www.garlic.com/~lynn/2017k.html#34 Bad History
https://www.garlic.com/~lynn/2018b.html#63 Major firms learning to adapt in fight against start-ups: IBM
https://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2018d.html#13 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2018d.html#39 IBM downturn
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018e.html#28 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018f.html#112 The Post-IBM World
https://www.garlic.com/~lynn/2019c.html#59 The rise and fall of IBM
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: IBM Travel
Date: 27 Jan 2021
Blog: Facebook
For various transgressions in the early 80s, I was transferred from San Jose Research to Yorktown Research ... but left to live in San Jose (with offices and labs at various IBM locations around San Jose) ... having to commute to YKT a couple times a month. Work in San Jose mondays, take the red-eye out of SFO to Kennedy, drive directly to YKT (bright and early Tuesday) ... and then back to SFO friday afternoon. Started out on TWA ... they go under, I lose my miles and switch to PanAm. PanAm decides to concentrate on the Atlantic, gives up the west coast and sells off its Pacific planes to United (I sometimes recognized a former PanAm 747 being flown by United) ... switch to American (eventually passed a million AA miles).
transgressions included being blamed for online computer conferencing
(precursor to modern social media) on the internal network ... cmc
posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
Along the way, Bert Moldow (who was teaching networking at IBM SRI in Manhattan) wanted me to give a day's talk on the HSDT activity ... he couldn't choose a day when I was in NY ... so I got the red-eye, got to SRI before it opened, taught the class, and in the afternoon headed back to Kennedy; less than 24hrs from walking out the door at home to walking back in.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HSDT was T1 and faster computer links ... we were also working with
the NSF director and were supposed to get $20M to interconnect the NSF
supercomputer centers ... then congress cuts the budget, some other
things happen and eventually NSF releases an RFP (in part based on
what we already had running) ... old post with the 28Mar1986
preliminary release.
https://www.garlic.com/~lynn/2002k.html#12
Internal politics prevent us from bidding and the NSF director tries
to help by writing the company a letter (3Apr1986, NSF Director to IBM
Chief Scientist and IBM Senior VP and director of Research, copying
IBM CEO) with support from other agencies, but that just makes the
internal politics worse (as does the comment that what we already had
running was at least 5yrs ahead of all RFP responses). As regional
networks connect into the centers, it grows into the NSFNET backbone
(precursor to the modern internet)
https://www.technologyreview.com/s/401444/grid-computing/
NSF network posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
One of the problems: corporate required that internal network links have encryption ... and it was really hard to find link encryptors faster than T1 ... so HSDT links carrying real data were essentially restricted to T1. Trivia ... about the same time, a major link encryptor vendor claimed that over half the link encryptors in the world were on the IBM internal network ... since nearly all internal network links (except HSDT's) used communication group products, which topped out at 56kbits/sec, they were effectively 56kbit-or-less link encryptors. I did get involved in doing link encryptors much faster than T1 ... but that is another story (involving a certain gov. agency).
posts mentioning three kinds of crypto:
https://www.garlic.com/~lynn/2008h.html#87 New test attempt
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
https://www.garlic.com/~lynn/2009p.html#32 Getting Out Hard Drive in Real Old Computer
https://www.garlic.com/~lynn/2010i.html#27 Favourite computer history books?
https://www.garlic.com/~lynn/2010o.html#43 Internet Evolution - Part I: Encryption basics
https://www.garlic.com/~lynn/2011g.html#20 TELSTAR satellite experiment
https://www.garlic.com/~lynn/2011g.html#60 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011h.html#0 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
https://www.garlic.com/~lynn/2011n.html#85 Key Escrow from a Safe Distance: Looking back at the Clipper Chip
https://www.garlic.com/~lynn/2012.html#63 Reject gmail
https://www.garlic.com/~lynn/2012i.html#70 Operating System, what is it?
https://www.garlic.com/~lynn/2012k.html#47 T-carrier
https://www.garlic.com/~lynn/2013g.html#31 The Vindication of Barb
https://www.garlic.com/~lynn/2013i.html#69 The failure of cyber defence - the mindset is against it
https://www.garlic.com/~lynn/2013k.html#77 German infosec agency warns against Trusted Computing in Windows 8
https://www.garlic.com/~lynn/2013k.html#88 NSA and crytanalysis
https://www.garlic.com/~lynn/2013o.html#50 Secret contract tied NSA and security industry pioneer
https://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014j.html#77 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015f.html#39 GM to offer teen driver tracking to parents
https://www.garlic.com/~lynn/2015h.html#3 PROFS & GML
https://www.garlic.com/~lynn/2016.html#101 Internal Network, NSFNET, Internet
https://www.garlic.com/~lynn/2016e.html#31 How the internet was invented
https://www.garlic.com/~lynn/2016f.html#106 How to Win the Cyberwar Against Russia
https://www.garlic.com/~lynn/2016h.html#0 Snowden
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2018.html#10 Landline telephone service Disappearing in 20 States
https://www.garlic.com/~lynn/2018d.html#33 Online History
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler lynn@garlic.com
Subject: IBM Kneecapping products
Date: 29 Jan 2021
Blog: Facebook
As I frequently post, the communication group had corporate strategic ownership of everything that crossed the datacenter walls and was fiercely fighting off client/server and distributed computing, trying to preserve its dumb terminal paradigm and install base.
They were also kneecapping products that did get out; a typical instance: the PS2 16mbit token-ring microchannel card had lower throughput than the PC/RT 4mbit token-ring (AT-bus) card.
801/risc, romp, rios, rs/6000, power, etc posts
https://www.garlic.com/~lynn/subtopic.html#801
They were also fiercely fighting the release of mainframe TCP/IP support ... but lost, and then switched to insisting that the product had to be released through them (since they had the strategic *STRANGLEHOLD* on everything that crossed datacenter walls); what shipped got an aggregate 44kbytes/sec using a 3090 processor. I did the RFC1044 enhancement and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
In the late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opened with the statement that the communication group was going to be responsible for the demise of the disk division; ... they were seeing customer data fleeing IBM datacenters to more distributed-computing-friendly platforms, with a drop in disk sales ... and all their attempts at correcting the situation were being vetoed by the communication group (*STRANGLEHOLD*).
dumb terminal posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
This was seriously affecting the whole IBM mainframe market and a
couple years later, IBM had gone into the red ... and was being
reorg'ed into the 13 "baby blues" in preparation for breaking up the
company ... gone behind paywall, but mostly lives free at wayback
machine
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
This was about the time the communication group had hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he initially demo'ed ran significantly faster than LU6.2 ... he was then told that "ev