List of Archived Posts
2024 Newsgroup Postings (11/15 - )
- Any interesting PDP/TECO photos out there?
- Origin Of The Autobaud Technique
- Origin Of The Autobaud Technique
- IBM CKD DASD
- IBM Transformational Change
- IBM Transformational Change
- IBM 5100
- IBM 5100
- IBM Transformational Change
- 4th Generation Programming Language
- Signetics 25120 WOM
- 4th Generation Programming Language
- 4th Generation Programming Language
- ARPANET And Science Center Network
- ARPANET And Science Center Network
- ARPANET And Science Center Network
- ARPANET And Science Center Network
- 60s Computers
- PS2 Microchannel
- 60s Computers
- The New Internet Thing
- The New Internet Thing
- IBM SE Asia
- IBM Move From Leased To Sales
- 2001/Space Odyssey
- Taligent
- IBM Move From Leased To Sales
- IBM Unbundling, Software Source and Priced
- IBM Unbundling, Software Source and Priced
- Computer System Performance Work
- Computer System Performance Work
- What is an N-bit machine?
- What is an N-bit machine?
- SUN Workstation Tidbit
- IBM and Amdahl history (Re: What is an N-bit machine?)
- IBM and Amdahl history (Re: What is an N-bit machine?)
- What is an N-bit machine?
- IBM Mainframe User Group SHARE
- IBM Mainframe User Group SHARE
- Applications That Survive
- We all made IBM 'Great'
- We all made IBM 'Great'
- Back When Geek Humour Was A New Concept To Me
- Apollo Computer
Any interesting PDP/TECO photos out there?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Any interesting PDP/TECO photos out there?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 17:45:58 -1000
Lynn Wheeler <lynn@garlic.com> writes:
the 360/67 came in within a year of taking intro class and univ. hires
me fulltime responsible for os/360 (tss/360 never came to fruition so it
ran as a 360/65 with os/360; I continue to get the machine room dedicated
for weekends). Student fortran ran under a second on 709 but initially
over a minute with os/360. I install HASP, cutting the time in half. I
then start redoing stage2 sysgen to carefully place datasets and PDS
members to optimize arm seek and multi-track search, cutting another
2/3rds to 12.9secs. Student fortran never got better than 709 until I
install Univ. of Waterloo WATFOR.
re:
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
While still at univ, CSC came out to install CP67 (3rd install after CSC
itself and MIT Lincoln Labs) ... and I mostly played with it during my
weekend dedicated time. Initially I spent most of the time rewriting
CP67 pathlengths to improve OS/360 running in virtual machine:
the OS/360 test stream ran 322secs on the real machine, but initially
856secs in virtual machine (534secs CP67 CPU). After a couple months I
got CP67 CPU down to 113secs (from 534) ... and was asked to attend the
CP67 "official" announcement at the spring '68 SHARE meeting in Houston
... where I gave presentations on both the (earlier) OS/360 optimization
and the (more recent) CP67 optimization work (running OS/360 in virtual
machine).
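A quick check of those numbers (a sketch of the arithmetic only; all
figures are from the post above):

# total virtual-machine time = OS/360 work itself + CP67 CPU overhead
os360_real = 322      # secs, OS/360 test stream on the real machine
vm_total = 856        # secs, same stream in a CP67 virtual machine
cp67_before = 534     # secs of that was CP67 CPU
cp67_after = 113      # secs CP67 CPU after the pathlength rewrite

assert os360_real + cp67_before == vm_total   # 322 + 534 = 856
print(f"overhead removed: {1 - cp67_after/cp67_before:.0%}")   # ~79%
print(f"implied time after: {os360_real + cp67_after} secs")   # 435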
I then rewrite I/O: ordered arm seek queuing (in place of FIFO) and
chained multiple 4k page transfers per I/O, optimizing
transfers/revolution for 2314 (disk) and 2301 (drum, from 70-80/sec to
270/sec peak), plus optimized page replacement and dynamic adaptive
resource management and scheduling (for multi-user CMS interactive).
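A minimal sketch of the ordered-seek idea above (purely illustrative,
not the CP67 source): service the pending request closest to the
current arm position instead of strict FIFO arrival order (production
code also has to guard against starving requests at the far edge):

def next_fifo(pending, arm):
    # FIFO: ignore arm position, take the oldest request
    return pending.pop(0)

def next_ordered(pending, arm):
    # ordered seek: take the request with the shortest arm travel
    i = min(range(len(pending)), key=lambda i: abs(pending[i] - arm))
    return pending.pop(i)

pending, arm, order = [180, 10, 175, 12, 190], 170, []
while pending:                       # requests are cylinder numbers
    arm = next_ordered(pending, arm)
    order.append(arm)
print(order)   # [175, 180, 190, 12, 10] -- one sweep, not arm thrashing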
CP67 initially had 1052&2741 terminal support with automagic terminal
type identification. Univ. had some ascii terminals (mostly tty33
... but some tty35) ... so I added tty/ascii support (integrated with
the terminal type identification). I then wanted to have a single
dial-in phone number for all terminal types ("hunt group"). Didn't
quite work, since the IBM telecommunication controller had taken a
shortcut and hard-wired terminal line speed.
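A hedged sketch of the identification idea (names here are illustrative,
not CP67's): cycle the controller's line scanner on an incoming line
until the terminal answers legibly, then bind that type to the port;
line SPEED was the one thing the IBM controller couldn't also switch:

SCANNERS = ["2741", "1052", "TTY"]   # selectable per line via SAD CCW

def identify_terminal(probe):
    # probe(scanner): switch scanner, poke terminal, report legible echo
    for scanner in SCANNERS:
        if probe(scanner):
            return scanner
    return None                      # unknown terminal type

# toy usage: a line with an ascii TTY dialed in
print(identify_terminal(lambda s: s == "TTY"))   # -> TTY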
This kicks off a univ clone telecommunication controller project,
building a 360 channel interface board for an Interdata/3 programmed to
simulate the IBM controller, with the addition of being able to do
dynamic line speed. Later it was upgraded with an Interdata/4 for the
channel interface and a cluster of Interdata/3s for the line(/port)
interfaces. It was then sold as a 360 clone controller by Interdata and
later Perkin-Elmer, and four of us get written up for (some part of)
the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
other ascii info ... originally 360 was supposed to be an ascii machine,
but the ascii unit record gear wasn't ready yet, so it was "temporarily"
going to be EBCDIC with the old BCD machines. "Biggest Computer Goof
Ever":
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other history
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/History_of_CP/CMS
above (currently) has confusion about Future System and Gene
Amdahl. Amdahl had won the battle to make ACS 360-compatible ... and
then leaves IBM when ACS/360 was killed
https://people.computing.clemson.edu/~mark/acs_end.html
... before Future System started.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://en.wikipedia.org/wiki/History_of_CP/CMS#Historical_notes
FS was completely different from 370 and was going to completely
replace 370; during FS, 370 projects were being killed off, and the lack
of new 370 products during the FS period is credited with giving clone
370 makers (including Amdahl) their market foothold. When FS finally
implodes there is a mad rush to get stuff back into the 370 product
pipelines, including the quick and dirty 3033&3081 efforts in parallel
... more information
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
A decade ago, I was asked to track down the executive decision to add
virtual memory to all 370s and found a staff member reporting to the
executive. Basically the (OS/360) MVT storage management was so bad
that regions typically had to be specified four times larger than used
... so that a standard 1mbyte 370/165 only ran four regions
concurrently, insufficient to keep the system busy and justified.
Mapping MVT to a 16mbyte virtual address space (VS2/SVS) allowed
concurrent regions to be increased by a factor of four (with a cap of 15
from the 4bit storage protect keys, a unique key for each concurrently
running region) with little or no paging (sort of like running MVT in a
CP67 16mbyte virtual machine).
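The cap in that paragraph is just the storage-protect-key arithmetic
(a sketch; key 0 reserved for the supervisor is standard 360/370
practice):

keys = 2 ** 4              # 4-bit storage protect key -> 16 values
region_cap = keys - 1      # key 0 reserved for supervisor -> 15 regions
mvt_regions = 4            # what a 1mbyte 370/165 ran under MVT
print(min(mvt_regions * 4, region_cap))   # 4x more regions, capped at 15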
pieces of that email exchange
https://www.garlic.com/~lynn/2011d.html#73
--
virtualization experience starting Jan1968, online at home since Mar1970
Origin Of The Autobaud Technique
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Origin Of The Autobaud Technique
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 21:52:20 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Instead, the docs said you just had to press RETURN once or twice, and the
terminal driver would automatically detect the right line speed and pop up
a nice, legible login prompt. In practice, I don't recall ever having to
press RETURN more than once. It just seemed like magic. (We had fewer TV
channels in those days...)
... mentioned it in post to
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
thread a few hrs ago ... in the late 60s, did it for the clone IBM 360
telecommunication controller we built using an Interdata/3 machine
(upgraded to an Interdata/4 with a cluster of Interdata/3s) ... that
Interdata and then Perkin-Elmer sold.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
initial (virtual machine) CP/67 delivered to univ had
1050&2741 terminal type support with automagic terminal type
recognition. Univ. had ascii TTY (mostly 33, but some 35), so I added
ascii terminal support integrated with automagic terminal type
recognition (able to use the SAD CCW to switch the terminal type line
scanner for each line/port). I then wanted a single dial-in number
("hunt group") for all terminal types ... but while the terminal type
line scanner could be switched for each port, IBM had hard-wired the
port line speed ... thus kicked-off the univ project to build our own
clone controller that also did "autobaud".
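A hedged sketch of the autobaud idea (illustrative, not the Interdata
firmware): time the shortest low pulse of the first character typed
(conventionally CR) and match one bit-time against candidate rates (the
rate list below is an assumption for the period's common terminals):

RATES = [110, 134.5, 300, 1200]      # candidate line speeds (bps)

def guess_rate(shortest_pulse_secs):
    # one bit-time is 1/baud; pick the closest candidate
    return min(RATES, key=lambda r: abs(1.0 / r - shortest_pulse_secs))

print(guess_rate(1 / 110))    # 110  (e.g. tty33)
print(guess_rate(0.00333))    # 300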
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
Origin Of The Autobaud Technique
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Origin Of The Autobaud Technique
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 22:08:35 -1000
Lynn Wheeler <lynn@garlic.com> writes:
initial (virtual machine) CP/67 delivered to univ had 1050&2741
terminal type support with automagic terminal type
recognition. Univ. had ascii TTY (mostly 33, but some 35), so I added
ascii terminal support integrated with automagic terminal type
recognition (able to use the SAD CCW to switch the terminal type line
scanner for each line/port). I then wanted a single dial-in number
("hunt group") for all terminal types ... but while the terminal type
line scanner could be switched for each port, IBM had hard-wired the
port line speed ... thus kicked-off the univ project to build our own
clone controller that also did "autobaud".
re:
https://www.garlic.com/~lynn/2024g.html#1 Origin Of The Autobaud Technique
trivia: turn of the century I had a tour of the datacenter that handled
the majority of dial-up POS credit-card swipe terminal calls east of the
Mississippi ... the telecommunication controller was a descendant of
what we had done in the 60s ... some question whether the mainframe
channel interface card was the same design we had done more than three
decades earlier.
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM CKD DASD
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CKD DASD
Date: 17 Nov, 2024
Blog: Facebook
CKD DASD for original os/360 (not just disks, but also things like the
230x "drums" and 2321 "data cell") .... "fixed-block architecture" was
introduced in the late 70s and IBM CKD increasingly became CKD
emulated on fixed-block disk (can be seen in the 3380 formulas for
records/track, where record size has to be rounded up to a multiple of
the fixed cell size). Currently "CKD" is still required even though no
CKD disks have been made for decades (not even emulated, everything
being simulated on industry-standard fixed-block).
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives
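The records/track rounding mentioned above works roughly like this
(a sketch; track_bytes, cell and overhead are illustrative stand-ins,
not the published 3380 constants):

import math

def records_per_track(data_len, track_bytes=47476, cell=32, overhead=480):
    # each record costs fixed overhead plus data rounded UP to a
    # multiple of the cell size -- the fixed-block tell
    per_record = overhead + math.ceil(data_len / cell) * cell
    return track_bytes // per_record

print(records_per_track(4096))   # 4k records on one (illustrative) track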
ECKD channel program architecture was originally introduced with
"Calypso" ... the 3880 disk controller speed-matching buffer, allowing
3380 3mbyte/sec disks to be attached to 370 1.5mbyte/sec channels.
1973, IBM 3340 "winchester"
https://www.computerhistory.org/storageengine/winchester-pioneers-key-hdd-technology/
trivia: when I 1st transferred to San Jose Research in 1977, I got to
wander around IBM (and non-IBM) datacenters in silicon valley,
including disk bldg14/engineering and bldg15/product-test across the
street. They were running 7x24, prescheduled, stand-alone testing and
said they had recently tried MVS, but it had 15min MTBF (in that
environment, requiring manual re-ipl). I offer to rewrite the
input/output supervisor to make it bullet-proof and never fail, allowing
any amount of on-demand, concurrent testing, greatly improving
productivity (downside was they wanted me to spend increasing amounts of
time playing disk engineer). Bldg15 tended to get very early engineering
processors for I/O testing ... and when they got the 1st engineering
3033 off the POK engineering floor, found that disk testing only took a
percent or two of CPU. We scrounge up a 3830 disk controller and a
string of 3330 drives for setting up our own private online service.
Person doing air-bearing simulation (part of thin-film head design)
https://en.wikipedia.org/wiki/Thin-film_head#Thin-film_heads
was getting a few turn-arounds a month on the SJR 370/195 (even with
high-priority designation). We set the air-bearing simulation up on the
bldg15 3033 (only about half the MIPS of the 195) and were able to get
several turn-arounds a day.
other trivia: original 3380 had 20 (data) track spacings between each
data track. They then cut the spacing in half for double the original
capacity and then cut the spacing again for triple the capacity.
Mid-80s, the father of 801/RISC technology wants me to help him with an
idea for a "wide" disk head ... transferring 16 closely-spaced disk
tracks (bracketed with a servo-track on each side) in parallel. Problem
was the IBM 3090: its channels were 3mbyte/sec while the "wide-head"
required 50mbyte/sec.
Then in 1988, the IBM branch office asked me if I could help LLNL
standardize some serial stuff they were working with. It quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec.
IBM mainframe eventually ships some of their serial stuff in the 90s as
ESCON (17mbytes/sec), when it was already obsolete. Later some POK
engineers become involved with FCS and define a heavy-weight protocol
that radically reduces the native throughput, eventually shipping as
FICON. The most recent public published benchmark I've found is the
2010 z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same
time an FCS was announced for (Intel) E5-2600 server blades claiming
over a million IOPS (two such FCS with higher throughput than 104
FICON). Note that IBM pubs recommend that SAPs (system assist
processors that do the actual I/O) be capped at 70% CPU ... which would
drop z196 throughput from the "Peak I/O" 2M IOPS to 1.5M IOPS.
Around 1988, Nick Donofrio had approved HA/6000 proposal, initially
for NYTimes to migrate their newspaper system (ATEX) off (DEC)
VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (already involved w/LLNL for "FCS") and commercial cluster
scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, who
have VAXcluster support in same source base with unix). Early JAN1992,
have meeting with Oracle CEO, where AWD/Hester tells Ellison we would
have 16-system clusters by mid1992 and 128-system clusters by
ye1992. Then mid-Jan, I update FSD about work with national labs
... and FSD then tells Kingston supercomputer group they would be
going with HA/CMP for the gov. (supercomputing). Then late JAN1992,
cluster scale-up is transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told we can't work with
anything that has more than four processors (we leave IBM a few months
later). Possibly contributing, was mainframe DB2 group were
complaining that if we were allowed to proceed, it would be years
ahead of them.
1993 (count of program benchmark iterations compared to reference
platform):
ES/9000-982 (8 processors) : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; cluster scale-up 16-system/2016MIPS,
128-system/16,128MIPS
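Those cluster figures are straight multiples of the single-system
number (a quick arithmetic check of the post's figures):

print(408 / 8)               # 51.0 MIPS/processor for the ES/9000-982
print(126 * 16, 126 * 128)   # 2016 16128 -- the 16- and 128-system clusters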
IBM CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
posts mentioning calypso and eckd
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2015g.html#15 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2007e.html#40 FBA rant
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Transformational Change
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 18 Nov, 2024
Blog: Facebook
Late 80s, a senior disk engineer got a talk scheduled at the internal,
annual, world-wide communication group conference, supposedly on 3174
performance, but opened his talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales with
data fleeing (mainframe) datacenters to more distributed-computing
friendly platforms and had come up with a number of solutions, which
were constantly vetoed by the communication group (with their corporate
strategic ownership of everything that crossed datacenter walls,
fiercely fighting off client/server and distributed computing, trying
to preserve their dumb terminal paradigm).
The communication group stranglehold on mainframe datacenters wasn't
just disks, and a few years later IBM has one of the largest losses in
the history of US companies and was being reorg'ed into the 13 "baby
blues" (take-off on the AT&T "baby bells" breakup a decade earlier) in
preparation for breakup of the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
A senior disk division executive's partial countermeasure had been
investing in distributed computing startups that would use IBM disks
... and he would periodically ask us to stop in on some of his
investments to see if we could provide any help.
Before leaving IBM, we would drop in on an executive I'd known since
the 70s (with a top-floor corner office in Somers) and would also stop
by other people in the building and talk about the changes in the
computer market; mostly they could articulate the necessary IBM
changes. Visits over a period of time showed nothing had changed
(conjecture was that they were trying to maintain IBM status quo until
their retirement).
posts mentioning communication group fighting off client/server and
distributed computing trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Two decades earlier, Learson had tried (&failed) to block the
bureaucrats, careerists and MBAs from destroying Watsons'
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Transformational Change
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 18 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#4 IBM Transformational Change
The last product we did at IBM was HA/CMP. Nick Donofrio approved
HA/6000, initially for NYTimes to move their newspaper system from
(DEC) VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (including LLNL which I had already worked with for fibre-channel
standard and some other things) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster
support in the same source base with Unix). The S/88 product
administrator also starts taking us around to their customers and gets
me to write a section for the corporate continuous availability
strategy document (it gets pulled when both Rochester/AS400 and
POK/mainframe complain).
One of the San Jose distributed computing investments was Mesa
Archival (spin-off of NCAR supercomputer system in Boulder) including
port to HA/CMP, another was porting LLNL's UNICOS LINCS supercomputer
system to HA/CMP.
Early JAN1992, in a meeting with the Oracle CEO, AWD/Hester tells
Ellison that we would have 16-system clusters by mid92 and 128-system
clusters by ye92. I then update FSD on the HA/CMP work with national labs
... and they tell Kingston supercomputing project that FSD was going
with HA/CMP for gov. supercomputer. Late JAN1992, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told we weren't allowed to
work with anything that had more than four processors (we leave IBM a
few months later). Possibly contributing was mainframe DB2 complaining
that if we were allowed to go ahead, it would be years ahead of them.
1993 (MIPS benchmark, not actual instruction count but number of
program iterations compared to reference platform):
ES/9000-982 (8 processors) : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; cluster scale-up 16-system/2016MIPS,
128-system/16,128MIPS
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 5100
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 19 Nov, 2024
Blog: Facebook
IBM 5100
https://en.wikipedia.org/wiki/IBM_5100
Trivia: Ed and I transferred to SJR from CSC in 1977 and I got to
wander around lots of silicon valley datacenters. One of my hobbies
after joining IBM/CSC was enhanced production operating systems for
internal datacenters and HONE was a long-time customer. In the mid-70s,
the US HONE datacenters were consolidated in Palo Alto (across the back
parking lot from PASC, which had moved slightly up the hill from
Hanover; trivia: when facebook 1st moved into silicon valley, it was
into a new bldg built next door to the former US HONE consolidated
datacenter) and I spent some of my time there.
NOTE: after the 23Jun1969 unbundling announcement, HONE was created to
give branch office SEs online practice with guest operating systems
running in CP67 virtual machines. CSC had also ported APL\360 to CMS
for CMS\APL (redoing storage management from 16kbyte swapped
workspaces to large virtual memory demand-paged workspaces, and adding
APIs for system services like file I/O, enabling lots of real-world
apps) and HONE started using it for online sales&marketing support apps
(for DPD branch offices, regional offices and hdqtrs), which eventually
come to dominate all HONE use (and HONE clones started sprouting up all
over the world; my 1st IBM overseas business trips were to Paris and
Tokyo for HONE installs). PASC then did APL\CMS for VM370, and HONE,
after moving to VM370, leveraged PASC APL expertise (HONE had
become the largest use of APL in the world).
trivia: PASC did the 370/145 APL microcode assist ... claim was it ran
APL with throughput of 370/168.
... note it wasn't long before nearly all hardware orders had to be
first processed by a HONE APL app before submission.
Los Gatos also gave me part of a wing with offices and lab space and I
did the HSDT project there (T1 and faster computer links, both satellite
and terrestrial; had a TDMA Ku-band satellite system with 4.5m dishes in
Los Gatos and Yorktown and a 7m dish in Austin ... Austin used the link
for sending RIOS chip designs to the hardware logic simulator/verifier
in San Jose, claiming it helped bring in the RS/6000 design a year
early). Los Gatos also had the IBMer responsible for magstripe (showed
up on ATM cards) who developed the ATM machine (in the basement was
still the vault where they had kept cash from all over the world for
testing; also related, an early ATM machine across from a fast-food
place, where kids would feed tomato packets into the card slot)
At SJR I worked with Jim Gray and Vera Watson on the original
SQL/relational System/R .... and the Los Gatos VLSI lab had me help
with a different kind of relational system that they used for VLSI chip
design, "IDEA" ... done with Sowa (who was then down at STL)
http://www.jfsowa.com/
trivia: some of my files on the garlic website are maintained with an
IDEA-like RDBMS that I redid from scratch after leaving IBM.
I was also blamed for online computer conferencing in the late 70s and
early 80s on the internal network. It really took off spring of 1981,
when I distributed a trip report of a visit to Jim Gray at Tandem (he
had left SJR the fall before); folklore was that when the corporate
executive committee was told, 5of6 wanted to fire me. Apparently for
online computer conferencing and other transgressions, I was
transferred to YKT, but left to live in San Jose, with offices in
SJR/Almaden, LSG, etc ... but had to commute to YKT every couple weeks
(monday in san jose, SFO redeye to Kennedy, bright and early Tues in
YKT, return friday afternoon).
Almaden research was mid-80s on the eastern hill of almaden valley
... the Los Gatos lab was on the other side of the western hill from
almaden valley, on the road to the San Jose dump. LSG had a T3 collins
digital radio (microwave) on the hill above the lab with line-of-sight
to the roof of bldg12 on the main plant site. HSDT got T1 circuits from
Los Gatos to various places in the San Jose plant. One was a tail
circuit to the IBM C-band T3 satellite system connecting to Clementi's
E&S lab in Kingston that had whole boatloads of Floating Point Systems
boxes.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
some past 5100 posts
https://www.garlic.com/~lynn/2024f.html#45 IBM 5100 and Other History
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022c.html#86 APL & IBM 5100
https://www.garlic.com/~lynn/2022.html#103 Online Computer Conferencing
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#116 Watch IBM's TV ad touting its first portable PC, a 50-lb marvel
https://www.garlic.com/~lynn/2018b.html#96 IBM 5100
https://www.garlic.com/~lynn/2017c.html#7 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2010c.html#28 Processes' memory
https://www.garlic.com/~lynn/2005m.html#2 IBM 5100 luggable computer with APL
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 5100
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 19 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#6 IBM 5100
one of the other places I got to wander was disk bldgs 14
(engineering) and 15 (product test) across the street from
sjr/28. They were running 7x24, prescheduled, stand-alone test and
mentioned they had recently tried MVS (but it had 15min MTBF requiring
manual re-ipl in that environment). I offered to rewrite the I/O
supervisor to be bullet-proof and never fail ... so they could do any
amount of on-demand testing, greatly improving productivity (downside,
they wanted me to spend increasing time playing disk engineer). Later I
write an (internal only) research report about the I/O integrity work
and happen to mention the MVS 15min MTBF ... bringing down the wrath of
the MVS organization on my head.
Bldg15 would get very early engineering processors, and got something
like the 3rd or 4th engineering 3033 machine. It turned out testing
only took a percent or two of CPU, so we scrounge up a 3830 disk
controller and a string of 3330 drives and set up our private online
service. About that time somebody was doing air-bearing simulation
(part of designing the thin-film disk head, initially for 3370) on the
SJR 370/195 but was only getting a few turn-arounds a month. We set it
up on the bldg15/3033 and could get multiple turn-arounds a day (even
tho the 3033 was only about half the processing of the 195). Also ran
3270 coax under the street from bldg15 to my SJR office in 028.
1980, STL (since renamed SVL) was bursting at the seams and moving 300
people (and 3270s) from the IMS group to an offsite bldg. They had tried
"remote" 3270 support, but found the human factors totally
unacceptable. I get con'ed into doing channel-extender support so they
can place channel-attached 3270 controllers at the offsite bldg (with no
perceived human factors difference between offsite and in STL). There
was then an attempt to release my support to customers, but a group in
POK playing with some serial stuff got it vetoed (afraid that if it
was in the market, it would make it harder to release their stuff).
1988, IBM branch office asks if I could help LLNL standardize some
serial stuff they were working with; it quickly becomes the Fibre
Channel Standard (FCS), including some stuff I had done in 1980
... initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec. Then
POK gets their stuff released in the 90s with ES/9000 as ESCON, when
it is already obsolete (17mbytes/sec).
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Transformational Change
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 19 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#4 IBM Transformational Change
https://www.garlic.com/~lynn/2024g.html#5 IBM Transformational Change
Early 80s, I get the HSDT project, T1 and faster computer links (both
satellite and terrestrial), with some amount of conflict with the
communication group. Note, in the 60s, IBM had the 2701
telecommunication controller supporting T1 (1.5mbits/sec) links;
however, the move to SNA/VTAM in the mid-70s and resulting issues
seemed to cap controllers at 56kbit/sec. We were also working with the
NSF director and were supposed to get $20M to interconnect the NSF
supercomputer centers; then congress cuts the budget, some other things
happen and finally an RFP is released (in part based on what we already
had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed; folklore
is that 5of6 members of the corporate executive committee wanted to
fire me). The NSF director tried to help by writing the company a
letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP
and director of Research, copying the IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.
Somebody had been collecting (communication group) email with
misinformation about supporting NSFNET ... copy in this archived post
(heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
Note 1972, Learson tries (and fails) to block bureaucrats, careerists,
and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
20yrs later IBM has one of the largest losses in the history of US
companies (and was being reorg'ed into the 13 "baby blues" in
preparation for breaking up the company).
recently posted related comment/replies
https://www.garlic.com/~lynn/2024f.html#118 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#119 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
4th Generation Programming Language
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook
4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language
even before SQL (& RDBMS), which was originally done on VM370/CMS as
System/R at IBM SJR,
https://en.wikipedia.org/wiki/IBM_System_R
with later tech transfer to Endicott for SQL/DS and, nearly a decade
after the start of System/R, tech transfer to STL for DB2, there were
other "4th Generation Languages". One of the original 4th generation
languages was Mathematica's RAMIS, made available through NCSS (a '60s
online commercial cp67/cms spin-off of the IBM cambridge science
center; cp67/cms was the virtual machine precursor to vm370/cms).
NOMAD history:
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a
report that would have taken many hundreds of lines of Cobol to
produce. The product grew in capability and in revenue, both to NCSS
and to Mathematica, who enjoyed increasing royalty payments from the
sizable customer base. FOCUS from Information Builders, Inc (IBI),
did even better, with revenue approaching a reported $150M per
year. RAMIS moved among several owners, ending at Computer Associates
in 1990, and has had little limelight since. NOMAD's owners, Thomson,
continue to market the language from Aonix, Inc. While the three
continue to deliver 10-to-1 coding improvements over the 3GL
alternatives of Fortran, Cobol, or PL/1, the movements to object
orientation and outsourcing have stagnated acceptance.
... snip ...
other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica makes Ramis available to TYMSHARE for their
VM370-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to
Mathematica's RAMIS, the first Fourth-generation programming language
(4GL). Key developers/programmers of RAMIS, some stayed with
Mathematica others left to form the company that became Information
Builders, known for its FOCUS product
... snip ...
more spin-off of IBM CSC
https://www.computerhistory.org/collections/catalog/102658182
also some mention of the "first financial language" done in the 60s at
IDC ("Interactive Data Corporation", another cp67/cms '60s online
commercial spinoff from the IBM cambridge science center)
https://archive.computerhistory.org/resources/access/text/2015/09/102702884-05-01-acc.pdf
https://archive.computerhistory.org/resources/access/text/2015/10/102702891-05-01-acc.pdf
as an aside, a decade later, an IDC person involved w/FFL joins with
another to form a startup and does the original spreadsheet
other trivia: REX (before being renamed REXX and released to customers)
was also originally done on VM370/CMS
... and TYMSHARE offered commercial online VM370/CMS services
https://en.wikipedia.org/wiki/Tymshare
also started offering their VM370/CMS-based online computer
conferencing "free" to SHARE
https://www.share.org/
starting in Aug1976 as VMSHARE
http://vm.marist.edu/~vmshare
Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
CP67
https://en.wikipedia.org/wiki/CP-67
Cambridge Monitor System, renamed Conversational Monitor System for
VM370
https://en.wikipedia.org/wiki/Conversational_Monitor_System
IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
commercial online (virtual machine) services posts
https://www.garlic.com/~lynn/submain.html#online
some past 4th gen, RAMIS, NOMAD, NCSS, etc posts
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2023g.html#64 Mainframe Cobol, 3rd&4th Generation Languages
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#29 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018e.html#45 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2018d.html#3 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016e.html#107 some computer and online history
https://www.garlic.com/~lynn/2015h.html#27 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2014i.html#32 Speed of computers--wave equation for the copper atom? (curiosity)
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2013m.html#62 Google F1 was: Re: MongoDB
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#57 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#30 General Mills computer
https://www.garlic.com/~lynn/2012e.html#84 Time to competency for new software language?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#1 Deja Cloud?
https://www.garlic.com/~lynn/2011m.html#69 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2010q.html#63 VMSHARE Archives
https://www.garlic.com/~lynn/2010e.html#55 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#54 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2006k.html#37 PDP-1
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
--
virtualization experience starting Jan1968, online at home since Mar1970
Signetics 25120 WOM
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Signetics 25120 WOM
Newsgroups: alt.folklore.computers, comp.arch
Date: Wed, 20 Nov 2024 17:02:24 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Back in the 1970s, Signetics put out a joke data sheet for a "Write-Only
Memory" chip. Basically any data you sent to it would be simply thrown
away, and attempts to read from the chip would never return anything. They
were surprised to get a few serious queries from prospective customers
wanting to make use of this component.
i have some vague memory from the period using it (or something similar)
for optimal compression
--
virtualization experience starting Jan1968, online at home since Mar1970
4th Generation Programming Language
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#9 4th Generation Programming Language
besides the NOMAD refs in that post, there is lots more in the
computerhistory refs ... for some reason the way FACEBOOK swizzles the
URL, the trailing /102658182 gets lost (truncated to just the catalog
url). With the /102658182:
https://www.computerhistory.org/collections/catalog/102658182
Five of the National CSS principals participated in a recorded
telephone conference call with a moderator addressing the history of
the company's use of RAMIS and development of NOMAD. The licensing of
RAMIS from Mathematica and the reasons for building their own product
are discussed as well as the marketing of RAMIS for developing
applications and then the ongoing revenue from using these
applications. The development of NOMAD is discussed in detail along
with its initial introduction into the marketplace as a new offering
not as a migration from RAMIS. The later history of NOMAD is reviewed,
including the failure to build a successor product and the inability
to construct a viable PC version of NOMAD.
... snip ...
then points to
https://archive.computerhistory.org/resources/access/text/2012/04/102658182-05-01-acc.pdf
NCSS trivia: .... I was an undergraduate and the univ had hired me
responsible for os/360 running on a 360/67. CSC came out to install
CP67 Jan1968 (3rd install after CSC itself and MIT Lincoln Labs) and I
mostly got to play with it during my (48hr) weekend dedicated time, the
first couple months rewriting lots of CP67 to improve running OS/360 in
virtual machine. The OS/360 test stream ran 322secs stand-alone and
initially 856secs in virtual machine (534secs CP67 CPU). After a couple
months I got CP67 CPU down to 113secs (from 534) ... and was asked to
attend the CP67 "official" announcement at the spring '68 SHARE meeting
in Houston. CSC was then having a one-week class in June; I arrive
Sunday night and am asked to teach the CP67 class ... the people that
were supposed to teach it had given notice that Friday, leaving for
NCSS.
IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
4th Generation Programming Language
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#9 4th Generation Programming Language
https://www.garlic.com/~lynn/2024g.html#11 4th Generation Programming Language
other CP/67 trivia ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle Computer Products
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle Computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
and before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
Opel and Gates' mother
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates,
CEO of then-small software firm Microsoft, to discuss the possibility
of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel
set up the meeting at the request of Gates' mother, Mary Maxwell
Gates. The two had both served on the National United Way's executive
committee.
... snip ...
more CP67 trivia: one of my hobbies after joining the science center
was enhanced production operating systems for internal datacenters (the
internal branch office online sales&marketing support HONE was a
long-time customer; eventually HONE clones were sprouting up all over
the world, and customer orders were required to be run through HONE APL
apps before submitting).
In the morph of CP/67->VM/370, lots of features were simplified and/or
dropped. I then started migrating stuff to a VM370R2-based system. I
had an automated benchmarking system (I originally developed the
"autolog" command for benchmarking scripts but it then got adopted for
lots of automated operational purposes) and started with that to get a
baseline for VM370R2 before moving lots of CP67 to VM370.
Unfortunately, VM370 wasn't able to finish the benchmarking scripts
(w/o system crashes) and so I had to add a bunch of CP67 kernel
serialization and integrity stuff just to complete a set of
benchmarks (for baseline performance numbers).
Then for internal production CSC/VM, I enhanced VM370R2 with a bunch
of other CP67 work, including the kernel reorg needed for
multiprocessor operation (but not the multiprocessor support itself).
Then for a VM370R3-based CSC/VM I added multiprocessor support,
originally for the consolidated US HONE datacenter. All the US HONE
systems had been consolidated in Palo Alto (across the back parking lot
from the Palo Alto Scientific Center; trivia: when FACEBOOK 1st moved
into silicon valley, it was into a new bldg built next to the former US
HONE datacenter), upgraded with single-system-image, shared DASD,
load-balancing and fall-over across all the systems. Then with the
VM370R3-based CSC/VM they were able to add a 2nd processor to each
system.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP/67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
--
virtualization experience starting Jan1968, online at home since Mar1970
ARPANET And Science Center Network
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook
Co-worker at the cambridge science center was responsible for the
science center CP67-based wide-area network, which morphs into the IBM
internal network (larger than arpanet/internet from the beginning until
sometime mid/late 80s, about the time the communication group forced
the internal network to convert to SNA/VTAM), with the technology also
used for the corporate-sponsored Univ BITNET ("EARN" in Europe)
... account from one of the inventors of GML (at CSC, 1969)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Edson passed Aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
Bitnet (& EARN) ref:
https://en.wikipedia.org/wiki/BITNET
In 1977, Ed and I had transferred out to San Jose Research ... and SJR
installed the first IBM gateway to non-IBM (CSNET) in Oct1982 (CSNET
had gateways to arpanet and other networks, just before the arpanet
conversion to TCP/IP).
https://en.wikipedia.org/wiki/CSNET
1jan1983 was the big arpanet conversion from host protocol and IMPs to
internetworking protocol (TCP/IP); there were about 100 IMPs and 255
hosts, while the internal network was rapidly approaching 1000 ... old
archived post with a list of corporate world-wide locations that added
one or more hosts during 1983:
https://www.garlic.com/~lynn/2006k.html#8
SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY WITH THE
INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website ... blocked from converting internal network to
tcp/ip
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Also, I get the HSDT project in the early 80s, T1 and faster computer
links (both terrestrial and satellite), bringing conflicts with the
communication group. Note in the 60s, IBM had the 2701
telecommunication controller that supported T1, but then the move to
SNA/VTAM in the 70s and SNA/VTAM issues apparently cap controller links
at 56kbits/sec. Was working with the NSF director and was supposed to
get $20M to interconnect the NSF Supercomputing Centers; then congress
cuts the budget, some other things happen and then an RFP is released
(in part based on what we already had running). From the 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed; folklore
is that 5of6 members of the corporate executive committee wanted to
fire me). The NSF director tried to help by writing the company a
letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP
and director of Research, copying the IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.
Somebody had been collecting (communication group) email with
misinformation about supporting NSFNET (also about the time internal
network was forced to convert to SNA/VTAM) ... copy in this archived
post (heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
Note, late 80s a senior disk engineer gets a talk scheduled at the
internal, world-wide, annual communication group conference, supposedly
about 3174 performance, but opens the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division: the disk division was seeing a drop in disk sales with
data fleeing mainframe datacenters to more distributed-computing
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly veto'ed by the communication group
(with their corporate responsibility for everything that crossed
datacenter walls, fiercely fighting off client/server and distributed
computing, trying to preserve their dumb terminal paradigm). It wasn't
just a stranglehold on disks ... and a couple years later IBM has one of the
largest losses in the history of US companies and was being
reorganized into the 13 "baby blues" (somewhat take-off on AT&T "baby
bells" breakup a decade earlier) in preparation for breaking up the
company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
two decades earlier, Learson had tried (&failed) to block the
bureaucrats, careerists and MBAs from destroying Watsons'
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
--
virtualization experience starting Jan1968, online at home since Mar1970
ARPANET And Science Center Network
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#13 ARPANET And Science Center Network
other trivia: mid-80s, the communication group was fighting the release
of mainframe tcp/ip support; when that got reversed, they changed their
strategy and (with their corporate strategic responsibility for
everything that crossed datacenter walls) said it had to be released
through them. What shipped got 44kbytes/sec aggregate throughput using
nearly a whole 3090 processor. I then do "fixes" to support RFC1044
and, in some tuning tests at Cray Research between a Cray and a 4341,
got sustained 4341 channel throughput using only a modest amount of the
4341 processor (something like 500 times improvement in bytes moved per
instruction executed).
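A hedged reconstruction of that "500 times" figure (the throughput
numbers are from the paragraph above; the MIPS ratings are my
illustrative assumptions, not measured values):

base_bps = 44_000     # bytes/sec, stock stack, nearly a whole 3090 CPU
base_mips = 15.0      # assumed single 3090 processor
fix_bps = 1_000_000   # bytes/sec, assumed sustained 4341 channel speed
fix_mips = 1.2        # assumed whole 4341 (only partly busy in reality)

base = base_bps / (base_mips * 1e6)    # bytes moved per instruction
fixed = fix_bps / (fix_mips * 1e6)
print(round(fixed / base))   # ~284 even charging a WHOLE 4341; with only
                             # a "modest amount" busy, the ratio climbs
                             # toward the post's ~500x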
later in the early 90s, they hired a silicon valley contractor to do an
implementation of tcp/ip support directly in VTAM; what he initially
demo'ed had much higher throughput than LU6.2. He was then told that
"everybody knows" a proper TCP/IP implementation has much lower
throughput than LU6.2, and they would only be paying for a "proper"
implementation (trivia: he passed a couple months ago).
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
ARPANET And Science Center Network
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#13 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024g.html#14 ARPANET And Science Center Network
The Internet's 100 Oldest Dot-Com Domains
https://www.pcworld.com/article/532545/oldest_domains.html
my old post in internet group
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
and comment about IBM getting class-a 9-net (after interop88)
https://www.garlic.com/~lynn/2024b.html#35 Internet
with email
https://www.garlic.com/~lynn/2024b.html#email881216
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
--
virtualization experience starting Jan1968, online at home since Mar1970
ARPANET And Science Center Network
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 22 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#13 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024g.html#14 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024g.html#15 ARPANET And Science Center Network
GML was invented at the IBM Cambridge Science Center in 1969; a decade
later it morphs into ISO standard SGML and after another decade morphs
into HTML at CERN. The first "web" server in the US was the Stanford
SLAC (CERN sister institution) VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
NSF preliminary announcement mentions supercomputer software, and
ncsa.illinois.edu does mosaic
http://www.ncsa.illinois.edu/enabling/mosaic
then some of the people come out to silicon valley to do a mosaic
startup; NCSA complains about the use of "mosaic" and they change the
name to "netscape"
last product we did at IBM was HA/6000, I change the name to HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs and commercial cluster scale-up with RDBMS (Oracle, Sybase,
Informix, Ingres). Early Jan1992, we have meeting with Oracle CEO,
AWD/Hester tells Ellison that we would have 16-system clusters mid92
and 128-system clusters ye92. Late Jan1992, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told that we can't work on
anything with more than four processors (we leave IBM a few months
later).
Not long afterwards I was brought in as a consultant to a small
client/server startup; two former Oracle people (that were in the
Ellison/Hester meeting) are there responsible for something called the
"commerce server" and want to do financial transactions on the server.
The startup had also invented this technology called "SSL" they want
to use; the result is now frequently called "electronic commerce". I had
responsibility for everything between the webservers and the financial
payment networks. Afterwards I put together a talk on "Why Internet
Wasn't Business Critical Dataprocessing" (and the Internet Standards
RFC editor, Postel sponsors my talk at ISI/USC) based on the work I
had to do, multi-level security layers, multi-redundant operation,
diagnostics, processes, procedures, and documentation.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some posts mentioning ha/cmp, mosaic, netscape, business critical
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
--
virtualization experience starting Jan1968, online at home since Mar1970
60s Computers
From: Lynn Wheeler <lynn@garlic.com>
Subject: 60s Computers
Date: 22 Nov, 2024
Blog: Facebook
Amdahl wins the battle to make ACS 360-compatible; when it is
canceled, Amdahl leaves IBM. Some folklore: ACS/360 was canceled
because it might advance the state of the art too fast and IBM would
lose control of the market.
https://people.computing.clemson.edu/~mark/acs_end.html
As the quote above indicates, the ACS-1 design was very much an
out-of-the-ordinary design for IBM in the latter part of the 1960s. In
his book, Data Processing Technology and Economics, Montgomery
Phister, Jr., reports that as of 1968:
Of the 26,000 IBM computer systems in use, 16,000 were S/360 models
(that is, over 60%). [Fig. 1.311.2] Of the general-purpose systems
having the largest fraction of total installed value, the IBM S/360
Model 30 was ranked first with 12% (rising to 17% in 1969). The S/360
Model 40 was ranked second with 11% (rising to almost 15% in
1970). [Figs. 2.10.4 and 2.10.5] Of the number of operations per
second in use, the IBM S/360 Model 65 ranked first with 23%. The
Univac 1108 ranked second with slightly over 14%, and the CDC 6600
ranked third with 10%. [Figs. 2.10.6 and 2.10.7]
... snip ...
I took a 2 credit-hr intro to fortran/computers and at end of semester
was hired by the univ to reimplement 1401 MPIO in assembler for
360/30. Univ was getting a 360/67 for tss/360 to replace 709/1401, and
got a 360/30 replacing the 1401 temporarily pending the 360/67.
Univ. shutdown the datacenter on weekends and I would have the whole
place dedicated (although 48hrs w/o sleep made monday classes
hard). They gave me a bunch of hardware and software documents and I
got to design and implement my own monitor, device drivers, interrupt
handlers, error recovery, storage management, etc. and in a few weeks,
I had a 2000 card program. Within a year of taking the intro class,
the 360/67 arrived and I was hired fulltime responsible for os/360,
running the 360/67 as a 360/65 (tss/360 never came to production). The
709 ran student fortran in under a second; initially os/360 ran
student fortran in over a minute. I install HASP, cutting the time in
half. I then redo STAGE2 SYSGEN to carefully place datasets and PDS
members to optimize disk arm seek and multi-track search, cutting
another 2/3rds to 12.9secs. Time never got better than the 709 until I
install Univ. of Waterloo WATFOR.
CSC then comes out to install CP67 (3rd after CSC itself and MIT
Lincoln Labs) and I mostly get to play with it during my weekend
dedicated window, the first couple months rewriting lots of CP67 to
improve running OS/360 in a virtual machine. The OS/360 test stream
ran 322secs stand-alone and initially 856secs in virtual machine
(534secs CP67 CPU). After a couple months I got CP67 CPU down to
113secs (from 534) ... and was asked to attend the CP67 "official"
announcement at the spring '68 SHARE meeting in Houston. CSC was then
having a one week class in June; I arrive Sunday night and am asked to
teach the CP67 class, the people that were supposed to teach it having
given notice that Friday, leaving for NCSS (one of the 60s virtual
machine online commercial spin-offs of the science center).
Before I graduate, I'm hired fulltime into a small group in the Boeing
CFO office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit). I
think the Renton datacenter was possibly largest in the world, a
couple hundred million in IBM gear, 360/65s arriving faster than they
could be installed, boxes constantly staged in hallways around the
machine room (some joke that Boeing was getting 360/65s like other
companies got keypunches).
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning 709/1401, MPIO, Boeing, Renton, ACS/360:
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
--
virtualization experience starting Jan1968, online at home since Mar1970
PS2 Microchannel
From: Lynn Wheeler <lynn@garlic.com>
Subject: PS2 Microchannel
Date: 22 Nov, 2024
Blog: Facebook
There was a tight grip on microchannel and the communication group had
performance-kneecapped the IBM microchannel cards (part of the fierce
battle fighting off client/server and distributed computing, trying to
preserve the dumb terminal paradigm). Note AWD IBU (advanced
workstation division independent business unit) had done their own
4mbit token-ring card for the PC/RT (16bit AT bus) ... but for the
microchannel RS/6000 they were told they couldn't do their own cards
but had to use the standard PS2 microchannel cards. Turns out that the
$800 16mbit token-ring microchannel PS2 card had lower throughput than
the PC/RT 4mbit token-ring card (and standard $69 10mbit ethernet
cards had much higher throughput than both)
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
60s Computers
From: Lynn Wheeler <lynn@garlic.com>
Subject: 60s Computers
Date: 22 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
Early 70s, IBM had the Future System project, completely different
than 370 and going to completely replace it ... and internal politics
were killing off 370 projects ... claim is that the lack of new (IBM)
370 during the period is what gave the clone system makers (including
Amdahl) their market foothold (and IBM marketing had to fall back on
enormous amounts of FUD). some more FS detail
http://www.jfsowa.com/computer/memo125.htm
After joining IBM I continued to attend user group meetings (SHARE,
others) and drop by customers. The director of one of the largest
commercial "true blue" datacenters liked me to stop by and talk
technology. At some point, the IBM branch manager horribly offended
the customer and in retribution they ordered an Amdahl (lonely Amdahl
in vast sea of blue). Up until then Amdahl had been selling into
univ/technical/scientific markets, but this would be the first "true
blue" commercial install. I was then asked to go spend onsite for
6-12months at the customer. I talk it over with the customer and then
decline IBM's offer. I'm them told that the branch manager is good
sailing buddy of IBM CEO and if I didn't do this, I could forget
having a career, promotions, and raises.
After transferring to San Jose Research in late 70s, I would attend
the monthly BAYBUNCH meetings hosted by SLAC. Earlier, Endicott had
roped me into helping with the VM370 ECPS microcode assist ... and in
early 80s I got permission to give presentations at user group
meetings (including BAYBUNCH) on how it was done. After (BAYBUNCH)
meetings, we would usually adjourn to local watering holes, and Amdahl
people briefed me that they were developing HYPERVISOR ("multiple
domain") and grilled me for more ECPS details.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning future system, amdahl, ecps, baybunch, hypervisor
https://www.garlic.com/~lynn/2024f.html#30 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2015d.html#14 3033 & 3081 question
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
--
virtualization experience starting Jan1968, online at home since Mar1970
The New Internet Thing
From: Lynn Wheeler <lynn@garlic.com>
Subject: The New Internet Thing
Date: 22 Nov, 2024
Blog: Facebook
The New Internet Thing
https://albertcory50.substack.com/p/this-new-internet-thing-chapter-27
Notes on Chapter 27
https://albertcory50.substack.com/p/notes-on-chapter-27
Grant Avery is now working for Taligent, a joint effort between Apple
(before Jobs returned) and IBM, which everyone involved would rather
forget about. Not on my watch!
IBM still thought they could finally, really, really beat Microsoft at
the PC game. OS/2 hadn't done it, so now they were doing
Object-Oriented with Apple, and it was going to be the thing that
everyone would get behind. Grant's job is to evangelize for it with
other big suckers companies, HP being the one he's pitching in this
chapter.
... snip ...
We were doing cluster scale-up for HA/CMP ... working with national
labs on technical/scientific scale-up and with RDBMS vendors on
commercial scale-up. Then JAN1992 meeting in Ellison conference room
with several Oracle people (including CEO Ellison) on cluster
scale-up. Within a few weeks, cluster scale-up is transferred,
announced as IBM supercomputer (for scientific/technical *ONLY*), and
we were told we couldn't work on anything with more than four
processors. A few months later, we leave IBM.
Later, two of the Oracle people that were in the Ellison HA/CMP
meeting have left and are at a small client/server startup responsible
for something called "commerce server". We are brought in as
consultants because they want to do payment transactions; the startup
had also invented this technology called "SSL" they want to use, and
the result is now frequently called "electronic commerce". I had
responsibility for everything from the webserver to the payment
networks ... but could only recommend on the server/client side. About
the backend side, I would pontificate that it took 4-10 times the
effort to take a well designed, well implemented and tested
application and turn it into an industrial strength service. Postel
sponsored my talk on the subject at ISI/USC.
Object oriented operating systems for a time were all the rage in the
valley ... Apple was doing PINK and SUN was doing Spring. Taligent was
then spun off and a lot of the object technology moved there ... but
heavily focused on GUI apps.
Spring '95, I did a one-week JAD with a dozen or so Taligent people on
use of Taligent for business critical applications. There were
extensive classes/frameworks for GUI & client/server support, but
various critical pieces were missing. I was asked to do a week JAD
with Taligent about what it would take to provide support for
implementing industrial strength services (rather than
applications). Resulting estimate was 30% hit to their existing
"frameworks" and two new frameworks specifically for industrial
strength services. Taligent was also going thru evolution (outside of
the personal computing, GUI paradigm) ... a sample business
application required 3500 classes in Taligent and only 700 classes in
a more mature object product targeted for the business environment.
old comment from a Taligent employee: "The business model for all this
was never completely clear, and in the summer of 1995, upper
management quit en masse".
I think that shortly after Taligent vacated their building ... the Sun
Java group moved in.
About the last gasp for Spring was when we were asked in to consider
taking on turning Spring into a commercial product (we declined)
... and then Spring was shutdown and people moved over to Java.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning Taligent, object, JAD
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2009m.html#26 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm24.htm#20 On Leadership - tech teams and the RTFM factor
--
virtualization experience starting Jan1968, online at home since Mar1970
The New Internet Thing
From: Lynn Wheeler <lynn@garlic.com>
Subject: The New Internet Thing
Date: 22 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
I was a consultant ... but had run into lots of SUN people before on
various projects; one was the VP of the group that the HA/SUN product
reported to. An early HA/SUN financial customer ran into a glitch
resulting in loss of customer records and I was brought in as part of
the after action review. The SUN VP opened with a pitch about HA/SUN
... that sounded just like a HA/CMP marketing talk I had created
nearly a decade before.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
late 80s, IBM branch offices asked me if I could help SLAC with what
becomes the SCI standard and LLNL with what becomes the FCS
standard. 1995 (after having left IBM in 1992), I was spending some
time at a chip company and a (former SUN) SPARC10 engineer was there,
working on a high efficiency SCI chip and looking at being able to
scale up to a 10,000 machine configuration running Spring ... he got
me Spring documentation and (I guess) pushed SUN about making me an
offer to turn Spring into a commercial product.
https://web.archive.org/web/20030404182953/http://java.sun.com/people/kgh/spring/
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
some SLAC SCI and SPRING posts
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013b.html#21 New HD
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2012f.html#94 Time to competency for new software language?
https://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2008p.html#33 Making tea
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM SE Asia
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SE Asia
Date: 23 Nov, 2024
Blog: Facebook
I was introduced to Boyd in early 80s and used to sponsor his
briefings at IBM. One of his stories was about being vocal that the
electronics across the trail wouldn't work; then (possibly as
punishment) he is put in command of "Spook Base" (about the same time
I'm at Boeing). Some refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
Boyd biographies claim "spook base" was a $2.5B "windfall" for IBM
(60s dollars). Other recent Boyd refs:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Before I graduated, I had been hired fulltime into a small group in
the Boeing CFO office to help with the formation of Boeing Computer
Services, consolidating all dataprocessing into an independent
business unit. I think Renton was the largest datacenter in the world
(a couple hundred million in 360 stuff, but only 1/10th "spook base")
... 360/65s arriving faster than they could be installed, boxes
constantly staged in hallways around the machine room (joke that
Boeing was getting 360/65s like other companies got keypunches).
Late 80s, the commandant of the Marine Corps leverages Boyd for a
make-over of the corps (at a time when IBM was desperately in need of
a make-over) and we continued to have Boyd conferences at Marine Corps
Univ. through the last decade.
from Jeppeson webpage:
Access to the environmentally controlled building was afforded via the
main security lobby that also doubled as an airlock entrance and
changing-room, where twelve inch-square pidgeon-hole bins stored
individually name-labeled white KEDS sneakers for all TFA
personnel. As with any comparable data processing facility of that
era, positive pressurization was necessary to prevent contamination
and corrosion of sensitive electro-mechanical data processing
equipment. Reel-to-reel tape drives, removable hard-disk drives,
storage vaults, punch-card readers, and inumerable relays in
1960's-era computers made for high-maintainence systems. Paper dust
and chaff from fan-fold printers and the teletypes in the
communications vault produced a lot of contamination. The super-fine
red clay dust and humidity of northeast Thailand made it even more
important to maintain a well-controlled and clean working environment.
Maintenance of air-conditioning filters and chiller pumps was always a
high-priority for the facility Central Plant, but because of the
24-hour nature of operations, some important systems were run to
failure rather than taken off-line to meet scheduled preventative
maintenance requirements. For security reasons, only off-duty TFA
personnel of rank E-5 and above were allowed to perform the
housekeeping in the facility, where they constantly mopped floors and
cleaned the consoles and work areas. Contract civilian IBM computer
maintenance staff were constantly accessing the computer sub-floor
area for equipment maintenance or cable routing, with the numerous
systems upgrades, and the underfloor plenum areas remained much
cleaner than the average data processing facility. Poisonous snakes
still found a way in, causing some excitement, and staff were
occasionally reprimanded for shooting rubber bands at the flies during
the moments of boredom that is every soldier's fate. Consuming
beverages, food or smoking was not allowed on the computer floors, but
only in the break area outside. Staff seldom left the compound for
lunch. Most either ate C-rations, boxed lunches assembled and
delivered from the base chow hall, or sandwiches and sodas purchased
from a small snack bar installed in later years.
... snip ...
Boyd would claim that it was the largest air conditioned bldg in that
part of the world.
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Move From Leased To Sales
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Move From Leased To Sales
Date: 24 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#81 The Rise and Fall of the 'IBM Way'
https://www.garlic.com/~lynn/2024f.html#102 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024f.html#118 IBM Downturn and Downfall
leased to sales 1st half 70s (predated Gerstner by 20yrs). I've
commented that lease charges were based on the system meter that ran
whenever cpu(s) and/or channel(s) were busy. In the 60s a lot of work
was done with CP67 (precursor to vm370) for 7x24, online operation
... dark room, no operator, and the system meter would stop whenever
there was no activity (but instant-on whenever characters were
arriving ... analogous to large cloud megadatacenters today focused on
no electrical use when idle, but instant-on when needed). Note
cpu/channels had to be idle for 400ms before the system meter stopped;
trivia ... long after the switch-over from leased to sales, MVS still
had a timer event that woke up every 400ms ... which would make sure
the system meter never stopped (sketch below).
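A minimal simulation sketch of that meter behavior (my illustration;
only the 400ms threshold is from the above, the rest is invented):

# usage meter that keeps running until cpu/channels have been idle a
# full 400ms, plus the effect of a periodic 400ms timer pop

IDLE_THRESHOLD_MS = 400

def metered_ms(busy_events, horizon_ms):
    # busy_events: sorted (start_ms, duration_ms) busy intervals; meter
    # runs during busy time plus up to 400ms into each idle gap
    total, last_end = 0, None
    for start, dur in busy_events:
        if last_end is not None:
            total += min(start - last_end, IDLE_THRESHOLD_MS)
        total += dur
        last_end = start + dur
    if last_end is not None:
        total += min(horizon_ms - last_end, IDLE_THRESHOLD_MS)
    return total

# 1ms timer pop every 400ms on an otherwise idle system: every idle gap
# is 399ms, just under the threshold, so the meter never stops
print(metered_ms([(t, 1) for t in range(0, 10_000, 400)], 10_000))  # 10000
# pops only every 800ms: the meter gets to time out between them
print(metered_ms([(t, 1) for t in range(0, 10_000, 800)], 10_000))  # 5212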
1972, Learson was trying (& failed) to block the bureaucrats,
careerists, and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
20yrs later, IBM had one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (a take-off
on the AT&T "baby bells" breakup a decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
First half of the 70s there was the Future System project, completely
different than 370 and going to completely replace 370; internal
politics during FS was killing off 370 projects and the claim is that
the lack of new 370 products during FS is what gave the clone 370
system makers their market foothold (and IBM marketing had to fall
back on lots of FUD). It might be said that the switch from lease to
sales was motivated by trying to maintain/boost revenue during this
period. When FS implodes, there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off quick&dirty 3033&3081
in parallel. More FS:
http://www.jfsowa.com/computer/memo125.htm
also "Future System" (F/S, FS) project
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive
... snip ...
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning leased charges based on system meter
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#61 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#23 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2017i.html#65 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015c.html#103 auto-reboot
https://www.garlic.com/~lynn/2014m.html#113 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014h.html#19 weird trivia
https://www.garlic.com/~lynn/2014g.html#85 Costs of core
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#4 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
--
virtualization experience starting Jan1968, online at home since Mar1970
2001/Space Odyssey
From: Lynn Wheeler <lynn@garlic.com>
Subject: 2001/Space Odyssey
Date: 24 Nov, 2024
Blog: Facebook
HAL ... each letter one preceding IBM
IBM System 9000 (1982) M68k Laboratory Computer
https://en.wikipedia.org/wiki/IBM_System_9000
IBM ES/9000 (1990) ESA/390 mainframe
https://en.wikipedia.org/wiki/IBM_System/390#ES/9000
Amdahl won the battle to make ACS 360-compatible. Then when ACS/360 is
canceled, Amdahl leaves IBM and forms his own company. The following
also mentions some ACS/360 features that show up in ES/9000 in the 90s
(recent IBM webserver changes seem to have obliterated lots of
mainframe history)
https://people.computing.clemson.edu/~mark/acs_end.html
... by former IBMer:
HAL Computer
https://en.wikipedia.org/wiki/HAL_Computer_Systems
HAL SPARC64
https://en.wikipedia.org/wiki/HAL_SPARC64
--
virtualization experience starting Jan1968, online at home since Mar1970
Taligent
From: Lynn Wheeler <lynn@garlic.com>
Subject: Taligent
Date: 25 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#20 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#21 The New Internet Thing
The New Internet Thing
https://albertcory50.substack.com/p/this-new-internet-thing-chapter-27
Notes on Chapter 27
https://albertcory50.substack.com/p/notes-on-chapter-27
Grant Avery is now working for Taligent, a joint effort between Apple
(before Jobs returned) and IBM, which everyone involved would rather
forget about. Not on my watch!
IBM still thought they could finally, really, really beat Microsoft at
the PC game. OS/2 hadn't done it, so now they were doing
Object-Oriented with Apple, and it was going to be the thing that
everyone would get behind. Grant's job is to evangelize for it with
other big suckers companies, HP being the one he's pitching in this
chapter.
... snip ...
Taligent
https://en.wikipedia.org/wiki/Taligent
Taligent Inc. (a portmanteau of "talent" and "intelligent")[3][4] was
an American software company. Based on the Pink object-oriented
operating system conceived by Apple in 1988, Taligent Inc. was
incorporated as an Apple/IBM partnership in 1992, and was dissolved
into IBM in 1998.
... snip ...
We were doing cluster scale-up for HA/CMP ... working with national
labs on technical/scientific scale-up and with RDBMS vendors on
commercial scale-up. Early JAN1992, in a meeting with the Oracle CEO
and several Oracle people on cluster scale-up, AWD/Hester tells
Ellison that we would have 16-system clusters by mid92 and 128-system
clusters by ye92. A couple weeks later, by end of JAN1992, cluster
scale-up is transferred, announced as IBM supercomputer (for
scientific/technical *ONLY*), and we were told we couldn't work on
anything with more than four processors. A few months later, we leave
IBM.
Later, two of the Oracle people that were in the Ellison HA/CMP
meeting have left and are at a small client/server startup responsible
for something called "commerce server". We are brought in as
consultants because they want to do payment transactions; the startup
had also invented this technology called "SSL" they want to use, and
the result is now frequently called "electronic commerce". I had
responsibility for everything from the webserver to the payment
networks ... but could only recommend on the server/client side. About
the backend side, I would pontificate that it took 4-10 times the
effort to take a well designed, well implemented and tested
application and turn it into an industrial strength service. Postel
(Internet/RFC standards editor) sponsored my talk on the subject at
ISI/USC.
Object oriented operating systems for a time were all the rage in the
valley ... Apple was doing PINK and SUN was doing Spring. Taligent was
then spun off and a lot of the object technology moved there ... but
heavily focused on GUI apps.
Spring '95, I did a one-week JAD with a dozen or so Taligent people on
use of Taligent for business critical applications. There were
extensive classes/frameworks for GUI & client/server support, but
various critical pieces were missing. I was asked to do a week JAD
with Taligent about what it would take to provide support for
implementing industrial strength services (rather than
applications). Resulting estimate was 30% hit to their existing
"frameworks" and two new frameworks specifically for industrial
strength services. Taligent was also going thru evolution (outside of
the personal computing, GUI paradigm) ... a sample business
application required 3500 classes in Taligent and only 700 classes in
a more mature object product targeted for the business environment.
old comment from a Taligent employee: "The business model for all this
was never completely clear, and in the summer of 1995, upper
management quit en masse".
I think that shortly after Taligent vacated their building ... the Sun
Java group moved in (the General Manager of the business unit that
Java reported to was somebody I did some work with 15yrs earlier at
IBM Los Gatos).
late 80s, IBM branch offices had asked me if I could help SLAC with
what becomes the SCI standard and LLNL with what becomes the FCS
standard. 1995 (after having left IBM in 1992), I was spending some
time at a chip company and a (former SUN) SPARC10 engineer was there,
working on a high efficiency SCI chip and looking at being able to
scale up to a 10,000 machine distributed configuration running Spring
... he got me Spring documentation and (I guess) pushed SUN about
making me an offer to turn Spring into a commercial product.
About the last gasp for Spring was when we were asked in to consider
taking on turning Spring into a commercial product (we declined)
... and then Spring was shutdown and people moved over to Java
https://web.archive.org/web/20030404182953/http://java.sun.com/people/kgh/spring/
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
other posts mentioning Taligent JAD
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2010g.html#59 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#15 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2009m.html#26 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm24.htm#20 On Leadership - tech teams and the RTFM factor
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Move From Leased To Sales
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Move From Leased To Sales
Date: 26 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
note: Amdahl won the battle to make ACS "360" compatible; then when
ACS/360 is killed, he leaves IBM (prior to "FS") to form his own clone
370 company.
https://people.computing.clemson.edu/~mark/acs_end.html
After the FS implosion, the 3033 started out as 168 logic remapped to
20% faster chips and the 3081 is warmed-over FS technology (see
memo125 ref; the 3081 was going to be multiprocessor only). The 3081D
(two processor aggregate) was slower than the Amdahl single
processor. They then doubled the processor cache size for the 3081K
... aggregate about the same as the Amdahl single processor
... although IBM pubs had MVS two-processor throughput at only 1.2-1.5
times a single processor ... aka MVS on the single processor Amdahl
had much higher throughput than on the 2CPU 3081K, even tho aggregate
CPU cycles were about the same (back-of-envelope below), requiring
lots more marketing FUD.
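Back-of-envelope for that comparison (the 1.2-1.5 MVS two-processor
factor is from the IBM pubs cited above; the normalized processor
speeds are illustrative assumptions):

# MVS throughput, single-CPU Amdahl vs 2-CPU 3081K (normalized units;
# assumes Amdahl single CPU ~= 3081K two-CPU aggregate, per the above,
# and no SMP overhead on a single processor)

cpu_3081k = 1.0                  # one 3081K processor, normalized
amdahl = 2 * cpu_3081k           # aggregate cycles about the same

for mp_factor in (1.2, 1.5):     # IBM pubs: 2-way MVS = 1.2-1.5x one CPU
    mvs_3081k = mp_factor * cpu_3081k
    print(f"{mp_factor}: Amdahl/3081K MVS throughput = {amdahl / mvs_3081k:.2f}")

# prints 1.67 and 1.33 ... a third to two-thirds more MVS throughput on
# the single-processor Amdahl for the same aggregate cycles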
Also customers weren't converting/migrating to MVS/XA as planned and
so tended to run 3081s in 370-mode. Amdahl was having more success
because it had come out with HYPERVISOR microcode ("multiple domain")
and could run MVS and MVS/XA concurrently as part of migration (in the
wake of FS implosion, the head of POK had convinced corporate to kill
IBM's virtual machine VM370, shutdown the development group and
transfer all the people to POK for MVS/XA; eventually Endicott manages
to save the VM370 product mission for the mid-range, but had to
recreate a development group from scratch) ... it wasn't until almost
a decade later that IBM responded with LPAR and PR/SM for 3090.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts mentioning 3081, mvs/xa, amdahl hypervisor, lpar and pr/sm
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018.html#97 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Unbundling, Software Source and Priced
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Unbundling, Software Source and Priced
Date: 26 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
Last product we did at IBM started out as HA/6000, originally for the
NYTimes to move their newspaper system from VAXCluster to RS/6000. I
rename it HA/CMP when we start doing technical/scientific cluster
scale-up with national labs and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster
support in the same source base with Unix). Early Jan1992, had a
meeting with the Oracle CEO where AWD/Hester tells Ellison that we
would have 16-system clusters by mid92 and 128-system clusters by
ye92. I then update FSD (federal systems division) about the HA/CMP
work with the national labs and they tell the IBM Kingston
Supercomputer group that they were going with HA/CMP for
gov. accounts. Late JAN1992, HA/CMP scale-up is transferred to
Kingston for announce as IBM Supercomputer (for technical/scientific
*ONLY*) and we are told we can't work on anything with more than four
processors. We leave IBM a few months later.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Not long later, I am brought in as consultant to a small client/server
startup. Two of the former Oracle people (that were in the JAN1992
Oracle Ellison meeting) are there responsible for something called
"commerce server" and want to do payment transactions on the server;
the startup had also invented this technology they call "SSL" they
want to use, and it is now frequently called "electronic commerce". I
was given responsibility for everything between the webservers and the
payment networks.
The payment networks had been using circuit based technologies and
their trouble desk standard included doing 1st level problem
determination within 5mins. The initial testing of "electronic
commerce" had experienced a network failure that after 3hrs was closed
as NTF (no trouble found). I had to bring the webserver "packet based"
operation up to the payment network standards. Later, I put together a
talk on "Why Internet Isn't Business Critical Dataprocessing" based on
the software, documentation, procedures, etc, that I had to do for
"electronic commerce", which the Internet Standards/RFC editor, Postel
https://en.wikipedia.org/wiki/Jon_Postel
sponsored at ISI/USC.
Other trivia: In early 80s, I got the IBM HSDT effort, T1 and faster
computer links (both terrestrial and satellite), resulting in various
battles with the communication group (in the 60s, IBM had the 2701
telecommunication controller that supported T1 links, but the
transition to SNA/VTAM in the mid-70s and associated issues seemed to
cap links at 56kbits/sec). Was working with the NSF director and was
supposed to get $20M to interconnect the NSF Supercomputing
centers. Then congress cuts the budget, some other things happen and
finally an RFP is released (in part based on what we already had
running). From NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed; folklore
is that 5of6 members of the corporate executive committee wanted to
fire me). The NSF director tried to help by writing the company a
letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior
VP and director of Research, copying the IBM CEO) with support from
other gov. agencies ... but that just made the internal politics worse
(as did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.
Note: IBM's transition in the 80s from source available to
"object-code only" resulted in the OCO-Wars with customers ... some of
this can be found in the VMSHARE archives ... aka TYMSHARE had offered
their VM370/CMS-based online computer conferencing system, free to
(mainframe user group) SHARE starting in Aug1976 ... archives here:
http://vm.marist.edu/~vmshare
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some Business Critical Dataprocessing posts
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Unbundling, Software Source and Priced
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Unbundling, Software Source and Priced
Date: 26 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
OS/360 SYSGEN was mostly specifying (STAGE1) hardware configuration
and system features ... it would "assemble" with macros ... the macros
mostly generated (STAGE2) several hundred cards of job control
statements that selected which executables to be moved/copied to the
system disk(s). I started reworking the sequence of STAGE2 cards to
order datasets and PDS (executable) members to improve/optimize disk
arm seeks and multi-track searches (sketch below).
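A hypothetical sketch of the placement idea (illustrative only, not
actual STAGE2 statements; the SYS1 names are standard OS/360 system
datasets, the reference frequencies and extent sizes are invented):

# toy model: allocate the most-referenced datasets adjacent to each
# other so the arm rarely travels far between references

ref_freq = {                 # dataset -> relative reference frequency (invented)
    "SYS1.LINKLIB": 50,      # program fetch, hottest
    "SYS1.SVCLIB": 20,
    "SYS1.PROCLIB": 15,
    "SYS1.MACLIB": 5,
}

cyl = 0
for name in sorted(ref_freq, key=ref_freq.get, reverse=True):
    print(f"allocate {name} at cylinder {cyl}")   # hottest first, contiguous
    cyl += 10                # assumed fixed 10-cylinder extents

The same idea applies within a PDS: put the most frequently fetched
members at the front so multi-track directory searches end sooner.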
some recent posts mention sysgen/stage2, optimize, arm seek,
multi-track search
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
--
virtualization experience starting Jan1968, online at home since Mar1970
Computer System Performance Work
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer System Performance Work
Date: 27 Nov, 2024
Blog: Facebook
1979, the largest national grocery store chain was having severe store
operation performance issues (12-13? regions partitioned across the
systems). They had a large IBM DSD/POK multi-system shared DASD
datacenter and apparently had all the usual DPD & DSD/POK performance
experts through before they got around to asking me. I was brought
into a large classroom with tables covered with system activity
performance reports. After about 30mins I noticed that during the
worst performance periods, the aggregate activity of a specific 3330
(summing across all the systems) flatlined between 6&7 physical I/Os
per second. It turned out to be a shared 3330 with a large PDS dataset
for the store controller applications. It had a three-cylinder PDS
directory and basically the disk was spending nearly all its time
doing PDS directory full-cylinder multi-track searches ... resulting
in peak aggregate store controller application load throughput of
two/sec for all the stores in the country (back-of-envelope
below). Two fixes: partition the applications into multiple PDS
datasets on different disks, and then replicate private sets for each
system (on non-shared disks with non-shared controllers).
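Back-of-envelope for those numbers (a sketch assuming standard 3330
geometry, 3600rpm and 19 tracks/cylinder; the three-cylinder directory
and the observed 6-7 I/Os and two loads/sec are from the activity
reports):

# why a 3-cylinder PDS directory caps member fetches at ~2/sec: a
# multi-track search ties up disk, controller and channel for a full
# revolution per track searched

revs_per_sec = 3600 / 60                       # 3330 spins at 3600rpm
tracks_per_cyl = 19

cyl_search = tracks_per_cyl / revs_per_sec     # ~0.32s per cylinder searched
avg_search = 1.5 * cyl_search                  # on average, halfway thru 3 cyls
print(f"avg directory search ~{avg_search:.2f}s")   # ~0.48s

# add a directory-block read, the seek, and the member load itself and
# each program fetch holds the disk roughly half a second:
print(f"~{1/0.5:.0f} loads/sec aggregate")     # ~2/sec across ALL systems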
Mid-80s, the communication group was fiercely fighting off
client/server and distributed computing, including blocking release of
mainframe TCP/IP support. When that was reversed, they changed their
tactic: since the communication group had corporate strategic
responsibility for everything that crossed datacenter walls, it had to
be released through them. What shipped got aggregate 44kbytes/sec
throughput using nearly a whole 3090 processor. I then did changes to
support RFC1044 and in some tuning tests at Cray Research between a
Cray and a 4341, got sustained 4341 channel throughput using only a
modest amount of 4341 processor (something like 500 times improvement
in bytes moved per instruction executed).
Turn of the century (after leaving IBM), I was brought into a large
financial outsourcing datacenter that was handling half of all credit
card accounts in the US and had 40+ systems (@$30M each, 40+ * $30M >
$1.2B; the number of systems was what was needed to finish settlement
in the overnight batch window), all running the same 450K statement
COBOL program. They had a large group that had been handling
performance care and feeding for decades but had gotten somewhat
myopic. In the late 60s and early 70s there was lots of performance
analysis technology developed at the IBM Cambridge Scientific Center
... so I tried some alternate technologies for a different view and
found 14% improvement.
In the 60s, at univ, I took a 2 credit-hr intro to fortran/computers
and at the end of the semester was hired to rewrite 1401 MPIO in
assembler for 360/30 (I was given lots of hardware & software manuals
and got to design and implement my own monitor, device drivers, error
recovery, storage management, etc ... and within a few weeks had a
2000 card program). Univ. was getting a 360/67 for tss/360 replacing
709/1401 and a 360/30 temporarily replaced the 1401 pending the
360/67. Within a year of taking the intro class, the 360/67 arrived
and I was hired fulltime responsible for os/360 (tss/360 never came to
production and it ran as a 360/65). Student Fortran ran under a second
on the 709 but well over a minute with 360/67 os/360. I install HASP,
cutting the time in half. I then start redoing STAGE2 SYSGEN; instead
of a starter system sysgen, I ran it in the production system with
HASP, with statements reordered to carefully place datasets and PDS
members for optimizing arm seek and multi-track searches ... getting
another 2/3rds improvement, down to 12.9secs ... never got better than
the 709 until I install Univ. of Waterloo WATFOR.
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning (multi-track search) grocery store and univ work
as well as 450k statement COBOL for credit card processing
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2022c.html#70 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
--
virtualization experience starting Jan1968, online at home since Mar1970
Computer System Performance Work
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer System Performance Work
Date: 28 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
CSC comes out to the univ. to install CP67 (3rd install after CSC
itself and MIT Lincoln Labs) and I would mostly play with it during my
weekend dedicated time. I initially start out rewriting large parts to
improve running OS/360 in a CP67 virtual machine. The test stream ran
322secs on the real machine, initially 856secs in virtual machine
(534secs CP67 CPU). After a couple months I got CP67 CPU down to
113secs (from 534). I then start redoing other parts of CP67: page
replacement algorithm, thrashing controls, scheduling (aka dynamic
adaptive resource management), ordered arm seek queuing, and multiple
chained page request channel programs optimizing transfers/revolution
(e.g. 2301 paging drum peak from 80/sec to 270/sec).
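A toy illustration of the ordered arm seek queuing idea (not CP67
code; the cylinder numbers are invented): service queued requests in
cylinder order in the direction of arm travel instead of arrival
order, cutting total arm motion.

  # FIFO vs ordered (elevator) seek servicing: total cylinders traveled
  def fifo_travel(arm, requests):
      total = 0
      for cyl in requests:                 # service strictly in arrival order
          total += abs(cyl - arm)
          arm = cyl
      return total

  def elevator_travel(arm, requests):
      up = sorted(c for c in requests if c >= arm)                   # sweep out ...
      down = sorted((c for c in requests if c < arm), reverse=True)  # ... then back
      total = 0
      for cyl in up + down:
          total += abs(cyl - arm)
          arm = cyl
      return total

  reqs = [183, 37, 122, 14, 124, 65, 67]
  print(fifo_travel(53, reqs), elevator_travel(53, reqs))   # 640 vs 299 cylinders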
After graduating and joining IBM CSC, one of my hobbies was enhanced
production operating systems for internal datacenters. Then in the
morph from CP67->VM370, a lot of stuff was simplified and/or dropped,
and starting with VM370R2, I began adding lots of it back in. The
23jun1969 unbundling announcement included charging for software
(although they were able to make the case that kernel software was
still free).
Then came the IBM Future System project, completely different from 370
and intended to completely replace it; internal politics were killing
off 370 efforts, and the claim was that the lack of new IBM 370
products during FS gave the 370 clone makers their market foothold
(all during FS I continued to work on 360&370 stuff, periodically
ridiculing FS). When FS finally implodes, there is a mad rush to get
stuff back in the 370 product pipelines, including kicking off the
quick&dirty 3033&3081 efforts in parallel.
The rise of the clone 370 makers also contributed to the decision to
start charging for kernel software, and a bunch of my internal
production VM370 stuff was selected as the guinea pig. A corporate
expert reviewed it and said he wouldn't sign off because it didn't
have any manual tuning knobs, which were "state-of-the-art". I tried
explaining dynamic adaptive, but it fell on deaf ears. I package up
some manual tuning knobs as a joke and call it SRM, as parody on the
vast array of MVS SRM knobs, with full source code and documentation.
The joke, from operations research: the dynamic adaptive code had
greater degrees of freedom than the SRM values, so it could
dynamically compensate for any manual setting. I package all the
dynamic adaptive code as "STP" (from the TV adverts, "the racer's
edge").
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement and thrashing control posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
some recent posts mentioning charging for kernel software
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024e.html#83 Scheduler
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#31 HONE &/or APL
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#20 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#45 Wheeler Scheduler
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#54 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#110 APL
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
--
virtualization experience starting Jan1968, online at home since Mar1970
What is an N-bit machine?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is an N-bit machine?
Newsgroups: comp.arch
Date: Thu, 28 Nov 2024 11:45:38 -1000
jgd@cix.co.uk (John Dallman) writes:
In early computer designs, arithmetic registers were much longer than
addresses, the classic examples being machines with 36-bit words and 15-
to 18-bit addresses.
Large logical address spaces started with the IBM 360, which had 32-bit
arithmetic registers and 32-bit address registers. You couldn't put
32-bits worth of physical memory in a machine for over a decade after it
appeared, but it was allowed for in the architecture.
360 had 32bit registers but addressing only used 24bits (16mbyte)
... except for 360/67 virtual memory mode which had 32bit addressing
(when IBM got around to adding virtual memory to all 370s, it was only
24bit addressing ... it wasn't until the 80s with 370/xa that 31bit
addressing was introduced). 360/67 also allowed for all
(multiprocessor) CPUs to address all channels (360/65 multiprocessors
and later 370 multiprocessors simulated multiprocessor I/O with
multi-channel controllers connected to different dedicated processor
channels at the same address).
360/67
https://bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf
https://bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf
Before 370/xa, MVS was getting so bloated that they did a hack to the
3033 for 64mbyte real memory ... still 24bit (real & virtual)
instruction addressing ... but they scavenged two unused bits in the
16bit virtual memory PTE, used to prefix the 12bit page numbers (4096
4096-byte pages ... 16mbytes) into 14bit page numbers (16384 4096-byte
pages), aka translating 24bit (virtual) addresses into 26bit (real)
addresses (64mbytes) ... pending 370/xa and 31bit.
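A sketch of the address arithmetic (the exact bit positions are my
assumption for illustration, not the documented 3033 PTE layout; the
12->14 bit frame-number extension and 26-bit result come from the
description above):

  # 16-bit PTE: 12-bit page frame number plus 2 formerly-unused bits
  # scavenged to extend it to 14 bits; 14 + 12 offset bits = 26-bit real
  # addresses (64mbytes) while instructions still form 24-bit addresses
  def real_address(pte16, byte_offset):
      frame12 = (pte16 >> 4) & 0xFFF       # original frame number (assumed position)
      extra2 = pte16 & 0x3                 # the two scavenged bits (assumed position)
      frame14 = (extra2 << 12) | frame12   # extended 14-bit frame number
      return (frame14 << 12) | (byte_offset & 0xFFF)

  print(hex(real_address(0xFFF3, 0x456)))  # 0x3fff456, above the 16mbyte line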
posts mentioning some hacks that had to be crafted onto the 3033
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2012e.html#80 Word Length
https://www.garlic.com/~lynn/2011f.html#50 Dyadic vs AP: Was "CPU utilization/forecasting"
--
virtualization experience starting Jan1968, online at home since Mar1970
What is an N-bit machine?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is an N-bit machine?
Newsgroups: comp.arch
Date: Thu, 28 Nov 2024 12:44:23 -1000
jgd@cix.co.uk (John Dallman) writes:
Apples and oranges. IBM had fewer but much larger customer organisations,
and could not afford to upset them much. Most IBM mainframe shops write
some software themselves; that wasn't the case for Apple users in the
1980s.
re:
https://www.garlic.com/~lynn/2024g.html#31 What is an N-bit machine?
Amdahl had won the battle to make ACS, 360 compatible, then when ACS/360
was canceled, he left IBM and formed his own 370 clone mainframe
company.
https://people.computing.clemson.edu/~mark/acs_end.html
Circa 1971, Amdahl gave a talk in a large MIT auditorium and somebody
in the audience asked him what justifications he used to attract
investors; he replied that even if IBM were to completely walk away
from 370, there were hundreds of billions in customer-written 360&370
code that could keep him in business through the end of the century.
At the time, IBM had the "Future System" project that was planning on
doing just that ... and I assumed that was what he was referring to
... however in later years he claimed that he never had any knowledge
about "FS" (and had left IBM before it started).
trivia: during FS, internal politics was killing off 370 projects and
the claim is that the lack of new 370 products in the period is what
gave the clone 370 makers (including Amdahl) their market foothold.
some more info
http://www.jfsowa.com/computer/memo125.htm
when FS finally imploded, there was a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning Amdahl's talk at MIT
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017g.html#22 IBM Future Sytem 1975, 1977
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014h.html#65 Are you tired of the negative comments about IBM in this community?
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2009p.html#82 What would be a truly relational operating system ?
https://www.garlic.com/~lynn/2008s.html#17 IBM PC competitors
https://www.garlic.com/~lynn/2007f.html#26 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2003i.html#3 A Dark Day
--
virtualization experience starting Jan1968, online at home since Mar1970
SUN Workstation Tidbit
From: Lynn Wheeler <lynn@garlic.com>
Subject: SUN Workstation Tidbit
Date: 28 Nov, 2024
Blog: Facebook
some people from stanford came to the ibm palo alto science center
(PASC) to ask if IBM would build/sell a workstation they had
developed. PASC put together a review with several IBM locations and
projects (including ACORN, eventually announced as the ibm/pc, the
only one that made it out as a product). All the IBM locations and
projects claimed they were doing something much better ... and IBM
declines. The Stanford people then form their own company, SUN.
ibm unix workstation, 801/risc, iliad, romp, rios, pc/rt, rs/6000,
power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
some posts mentioning Stanford people asking IBM to produce
workstation they developed
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2023c.html#11 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#11 Open Software Foundation
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2022c.html#30 Unix work-alike
https://www.garlic.com/~lynn/2021i.html#100 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2017k.html#33 Bad History
https://www.garlic.com/~lynn/2017i.html#15 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#86 IBM Goes to War with Oracle: IT Customers Praise Result
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2013j.html#58 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM and Amdahl history (Re: What is an N-bit machine?)
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM and Amdahl history (Re: What is an N-bit machine?)
Newsgroups: comp.arch
Date: Fri, 29 Nov 2024 08:24:46 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
OTOH, Amdahl corporation did not make it until the end of the century
(at least not on its own; it became a subsidiary of Fujitsu), for two
reasons having to do with IBM not walking away from the S/360 family:
re:
https://www.garlic.com/~lynn/2024g.html#31 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#33 What is an N-bit machine?
a little history drift ... IBM communication group had corporate
strategic ownership of everything that crossed the datacenter walls and
was fiercely fighting off client/server and distributed computing
(trying to preserve its dumb terminal paradigm). Late 80s, a senior disk
engineer got a talk scheduled at a world-wide, internal, annual
communication group conference supposedly on 3174 performance but opened
the talk with statement that the communication group was going to be
responsible for the demise of the disk division; the disk division was
seeing data fleeing datacenter to more distributed computing friendly
platforms with drops in disk sales. The disk division had tried to come
up with a number of solutions, but they were constantly being vetoed by
the communication group.
One of the disk division executive's (partial) countermeasure was
investing in distributed computing startups that would use IBM disks
(and would periodically ask us to drop in on investments to see if we
could help).
It wasn't just disks but the whole mainframe industry, and a couple
years later IBM had one of the largest losses in the history of US
corporations and was being reorged into the 13 "baby blues" (a
take-off on the AT&T "baby bells" and its breakup a decade earlier) in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
(corporate hdqtrs) asking if we could help with the breakup. Before we
get started, the board brings in the former AMEX president as CEO to try
and save the company, who (somewhat) reverses the breakup.
note: AMEX had been in competition with KKR for the LBO
(private-equity) take-over of RJR; KKR wins, then runs into some
difficulties and hires away the AMEX president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
later as IBM CEO, uses some of the same methods used at RJR:
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
In the 80s, IBM mainframe hardware was the majority of IBM revenue,
but by the turn of the century it was a few percent of revenue and
dropping. Around 2010-2013, mainframe hardware was a couple percent of
IBM revenue and still dropping, although the mainframe group was 25%
of revenue (and 40% of profit) ... aka software and services.
... IBM was turning into a financial engineering company
IBM deliberately misclassified mainframe sales to enrich execs, lawsuit
claims. Lawsuit accuses Big Blue of cheating investors by shifting
systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business - and
moving said sales to its strategic business segments - in violation of
securities regulations.
flash-back: mid-80s, the communication group had been blocking release
of mainframe TCP/IP ... but when that was reversed, it changed its
tactic and said that since it had strategic ownership of everything
that crossed datacenter walls, it had to be released through them;
what shipped got an aggregate of 44kbytes/sec using nearly a whole
3090 processor. I then add support for RFC1044 and in some tuning
tests at Cray Research between a Cray and a 4341, got sustained 4341
channel media throughput using only a modest amount of the 4341
processor (something like a 500 times improvement in bytes moved per
instruction executed).
posts mentioning communication group stranglehold on mainframe
datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some recent posts mentioning IBM becoming financial engineering
company
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#108 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024e.html#141 IBM Basic Beliefs
https://www.garlic.com/~lynn/2024e.html#137 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023f.html#22 We have entered a second Gilded Age
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023c.html#13 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#105 IBM 360
https://www.garlic.com/~lynn/2022f.html#105 IBM Downfall
https://www.garlic.com/~lynn/2022d.html#83 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022c.html#91 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2022c.html#46 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022b.html#115 IBM investors staged 2021 revolt over exec pay
https://www.garlic.com/~lynn/2022b.html#52 IBM History
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM and Amdahl history (Re: What is an N-bit machine?)
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM and Amdahl history (Re: What is an N-bit machine?)
Newsgroups: comp.arch
Date: Fri, 29 Nov 2024 11:51:07 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
OTOH, FS eventually led to S/38 and the System i, which IBM sold
rather than introducing low-end S/370 (and later s390 and s390x)
members. The way that Heinz Zemanek (head of IBM's Vienna Lab until
1976) told the story was that IBM was preparing to be divided up if
they lost the anti-trust action, and introduced S/38 and one other
line (that I don't remember) in addition to S/370 for that.
re:
https://www.garlic.com/~lynn/2024g.html#31 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#33 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#34 IBM and Amdahl history (Re: What is an N-bit machine?)
one of the last nails in the Future System coffin was a study by the
IBM Houston Scientific Center: if 370/195 applications were rewritten
for an FS machine made out of the fastest available technology, they
would have the throughput of a 370/145 (about a factor of 30 times
slowdown).
After graduating and joining IBM, one of my hobbies was enhanced
production operating systems for IBM internal datacenters ... and I
was asked to visit lots of locations in the US, world trade, europe,
asia, etc (one of my 1st and long-time customers was the world-wide,
branch office, online sales&marketing support HONE systems). I
continued to work on 360/370 all through FS, even periodically
ridiculing what they were doing (it seemed as if the people were so
dazzled by the blue sky technologies, they had no sense of
speeds&feeds).
I had done a page-mapped filesystem for CMS and claimed I learned what
not to do from TSS/360 single-level store. FS single-level store was
even slower than TSS/360, and S/38 was simplified and slower yet
... aka for the S/38 low-end/entry market there was plenty of head
room between its throughput requirements and the available hardware
technology, processing power, disk speed, etc. S/38 had lots of canned
applications for its market and a very high-level, very simplified
system and programming environment (very much RPG oriented).
Early/mid 80s, my brother was regional Apple marketing manager and
when he came into town, I would get invited to business dinners
... including arguing MAC design with developers (before announce). He
had stories about figuring out how to remotely dial into the S/38
running Apple to track manufacturing and delivery schedules.
other trivia: late 70s, IBM had an effort to move the large variety of
internal custom CISC microprocessors (s/38, low&mid range 370s,
controllers, etc) to 801/risc chips (with a common programming
environment). First half 80s, for various reasons, those 801/RISC
efforts floundered (returning to custom CISC), and some of the
801/RISC chip engineers left IBM for other vendors.
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CMS page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
What is an N-bit machine?
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is an N-bit machine?
Newsgroups: comp.arch
Date: Fri, 29 Nov 2024 17:42:25 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
IBM had legendary market power, all the way up to monopoly status.
Whatever it decreed, its market had to follow.
re:
https://www.garlic.com/~lynn/2024g.html#31 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#33 What is an N-bit machine?
https://www.garlic.com/~lynn/2024g.html#34 IBM and Amdahl history (Re: What is an N-bit machine?)
https://www.garlic.com/~lynn/2024g.html#35 IBM and Amdahl history (Re: What is an N-bit machine?)
In the wake of the mid-70s Future System implosion, the head of POK
(high-end mainframes) also convinced corporate to kill the (virtual
machine) VM370 product, shutdown the development group and transfer
all the people to POK to work on MVS/XA (i.e. a lot of the XA/370
changes were to address various bloat & kludges in MVS/370). Come the
80s, with 3081 and MVS/XA, customers weren't converting as planned,
continuing to run 3081 in 370-mode with MVS/370. Amdahl was having
more success; it had developed microcode hypervisor/virtual machine
("multiple domain") support and was able to run both MVS/370 and
MVS/XA concurrently on the same (Amdahl) machine (note Endicott did
eventually obtain VM370 product responsibility for the mid-range 370s,
but had to recreate a development group from scratch).
Amdahl had another advantage: initially the 3081 was two-processor
only and 3081D aggregate MIPS was less than the single processor
Amdahl machine. IBM doubled the processor cache sizes for the 2-CPU
3081K, giving about the same aggregate MIPS as the Amdahl single CPU
... however at the time, IBM MVS documents had MVS two-processor
support (with its multiprocessor overhead) only getting 1.2-1.5 times
the throughput of a single processor (aka the Amdahl single processor
getting full MIPS throughput while the MVS two-processor 3081K was
losing lots of throughput to multiprocessor overhead).
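The throughput arithmetic behind that (a sketch; one 3081K CPU is
normalized to 1.0, and the 1.3 factor is just the midpoint of the
1.2-1.5 range):

  # normalize one 3081K CPU = 1.0; 3081K aggregate hardware ~= Amdahl 1-CPU
  amdahl_1cpu = 2.0            # ~same raw MIPS as the two 3081K CPUs combined
  mvs_2cpu_factor = 1.3        # MVS 2-CPU = 1.2-1.5 times one CPU (midpoint)
  print(amdahl_1cpu, 1.0 * mvs_2cpu_factor)   # 2.0 vs 1.3 delivered MVS throughput
  # i.e. the Amdahl single processor delivers ~50% more MVS work than a 3081K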
POK had done a rudimentary virtual machine software system for MVS/XA
testing ... which eventually ships as VM/MA (migration aid) and then
VM/SF (system facility) ... however since 370/XA had been primarily
focused on compensating for MVS issues ... 3081 370/XA required a lot
of microcode tweaks when running in virtual machine mode and the 3081
didn't have the space ... so switching in and out of VM/MA or VM/SF
virtual machine mode had a lot of overhead "paging" microcode. It was
almost a decade later before IBM was able to respond to Amdahl's
hypervisor/multiple-domain with 3090 LPAR & PR/SM support.
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning Amdahl, hypervisor, multiple domain, 3081,
vm/ma, vm/sf, sie, page microcode, 3090, LPAR
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe User Group SHARE
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe User Group SHARE
Date: 30 Nov, 2024
Blog: Facebook
recent similar post in usenet comp.arch group
https://www.garlic.com/~lynn/2024g.html#36 What is an N-bit machine?
Long winded warning: the Future System project, 1st half of the 70s,
was completely different from 370 and was going to completely replace
370; internal politics during FS was killing off 370 efforts and the
claim was that the lack of new 370 stuff gave the clone 370 makers
(including Amdahl) their market foothold (with IBM marketing having to
fall back on lots of FUD). Then when FS imploded there was a mad rush
to get stuff back into the 370 product pipelines.
When I had graduated and joined IBM, one of my hobbies was enhanced
production operating systems for internal datacenters and I continued
to work on 360&370 all during FS, even periodically ridiculing what
they were doing (and the online branch office sales&marketing support
HONE systems were a 1st and long-time customer). In the CP67->VM370
morph, lots of features were dropped (including multiprocessor
support) or greatly simplified. I then was adding lots of stuff back
in starting with VM370R2 (including a re-org of the kernel for
multiprocessor support, but not actual multiprocessor support itself).
The 23jun1969 unbundling announcement included starting to charge for
(application) software (but they managed to make the case that kernel
software should still be free). With the rise of clone 370 makers,
there was a decision to start charging for new kernel software,
eventually transitioning to charging for all kernel software in the
80s (which was then followed by the OCO-wars), and a bunch of my
internal stuff was selected as the "charged-for" guinea pig (released
with VM370R3).
All the US HONE datacenters had been consolidated in Palo Alto
(trivia: when FACEBOOK 1st moves into Silicon Valley, it was into a
new bldg built next door to the former US HONE datacenter) and had
been upgraded to the largest shared-DASD, single-system-image
operation with load balancing and fall-over across the complex. I then
put multiprocessor support into a VM370R3-based CSC/VM, initially for
US HONE so they could add a 2nd processor to each system (16 CPUs
aggregate; each SMP system was getting twice the throughput of a
single processor system, a combination of very short SMP overhead
pathlengths and some cache affinity hacks, see the sketch after this
paragraph). When IBM wanted to release multiprocessor support in
VM370R4, there was a problem. The kernel charging (transition) had a
requirement that hardware support was (still) free and couldn't
require charged-for software as pre-req (the multiprocessor kernel
re-org pre-req was in my VM370R3-based charged-for product).
Eventually the decision was made to move something like 80%-90% of the
code from my "charged-for" VM370R3-based add-on product into the free
VM370R4 base (w/o change in price for my VM370R4-based charged-for
product).
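A toy sketch of the cache affinity idea mentioned above (illustrative
only, nothing like the actual VM370 dispatcher; the task
representation is invented): prefer redispatching a runnable task on
the CPU where it last ran, so it finds its cache lines still warm.

  # pick the next task for 'cpu', preferring tasks with warm cache state there
  def dispatch(run_queue, cpu):
      for i, task in enumerate(run_queue):
          if task["last_cpu"] in (cpu, None):   # warm (or never-run) task
              run_queue.pop(i)
              task["last_cpu"] = cpu
              return task
      if run_queue:                             # nothing warm: take the head anyway
          task = run_queue.pop(0)
          task["last_cpu"] = cpu
          return task
      return None

  q = [{"id": "A", "last_cpu": 0}, {"id": "B", "last_cpu": 1}]
  print(dispatch(q, 1)["id"])   # "B": CPU 1 skips past A for its warm task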
As part of the FS implosion and mad rush back to 370, Endicott cons me
into helping with the ECPS microcode assist ... old post with the
initial analysis for ECPS
https://www.garlic.com/~lynn/94.html#21
and another group cons me into helping with a 16-CPU SMP
multiprocessor (in part because I was getting 2-CPU SMP throughput
twice 1-CPU throughput), and we con the 3033 processor engineers into
helping in their spare time. Everybody thought it was really great
until somebody tells the head of POK that it could be decades before
the POK favorite son operating system ("MVS") had (effective) 16-CPU
SMP support (MVS documentation had 2-CPU SMP only getting 1.2-1.5
times the throughput of a single CPU); POK doesn't ship a 16-CPU SMP
until after the turn of the century. He then directs some of us to
never visit POK again, and the 3033 processor engineers to keep "heads
down" with no distractions.
The head of POK also convinces corporate to kill the VM370 product,
shutdown the development group and transfer all the people to POK for
MVS/XA (Endicott eventually manages to save the VM370 product mission
for the mid-range, but had to recreate a development group from
scratch). They weren't planning on telling the VM370 people ahead of
time about the shutdown & move to POK, to minimize the number that
might escape the move. The information leaked early and several
managed to escape (this was in the early days of DEC VMS and the joke
was that the head of POK was a major contributor to VMS). There then
was a witch hunt for the leak source; fortunately for me, nobody gave
up the source. POK executives were then going around to internal
datacenters (including HONE) trying to browbeat them into moving off
VM/370 to MVS. Late 70s, HONE started a sequence of 3-4 year-long
programs unsuccessfully trying to move to MVS; then in the early 80s
somebody decided that HONE was unsuccessful in moving to MVS because
they were running my enhanced CSC/VM systems ... so HONE got a mandate
to move to the vanilla VM370 product (then they would be able to move
to MVS).
Early 80s, the transition to "charging-for" all kernel software was
complete and then the 2nd part begins: software becomes
object-code-only and the "OCO-wars".
Later, customers weren't converting to MVS/XA as planned, but Amdahl
was having more success; Amdahl had a purely microcoded HYPERVISOR
("multiple domain") and was able to run MVS and MVS/XA concurrently on
the same machine (helping customers migrate). POK had done a
rudimentary virtual machine system for MVS/XA testing (never intended
for customer release) ... which POK eventually ships as VM/MA
(migration aid) and then VM/SF (system facility) ... however since
370/XA had been primarily focused on compensating for MVS issues
... 3081 370/XA required a lot of microcode tweaks when running in
virtual machine mode and the 3081 didn't have the space ... so
switching in and out of VM/MA or VM/SF virtual machine mode had a lot
of overhead "paging" microcode (it was almost a decade later before
IBM was able to respond to Amdahl's hypervisor/multiple-domain with
3090 LPAR & PR/SM support).
Note: 3081s originally were only to be multiprocessor and the initial
3081D aggregate MIP rate was less than the Amdahl single processor MIP
rate. IBM doubled the processor cache size for the 3081K, which gives
it about the same aggregate MIP rate as Amdahl 1-CPU systems (although
the Amdahl 1-CPU had higher MVS throughput, since MVS 2-CPU overhead
only got 1.2-1.5 times the throughput of single CPU systems).
The VMMA/VMSF people then had a proposal for a few-hundred-person
group to enhance VMMA/VMSF with the feature, function, and performance
of VM/370, for VM/XA. A possible alternative was an internal Rochester
sysprog's addition of full 370/XA support to VM370 ... but the POK
group prevailed.
trivia: a corporate performance specialist reviewed my VM370R3-based
charge-for product and said that he wouldn't sign off on release
because it didn't have any manual tuning knobs, which were the current
state-of-the-art. I tried to explain Dynamic Adaptive Resource
Management/Scheduling (which I had originally done for CP67 as an
undergraduate in the 60s). I then created some manual tuning knobs and
packaged them as "SRM" (parody on the vast array of MVS SRM tuning
knobs) with full source, documentation and formulas ... and the
dynamic adaptive resource management/scheduling was packaged as "STP"
("the racer's edge", from TV advertisements). What most people never
caught, from the operations research "degrees of freedom" point: the
STP dynamic adaptive code could compensate for any SRM manual setting.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67I, CSC/VM, and SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management/scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
some other recent posts mentioning OCO-wars
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#114 REXX
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#23 IBM VM370 "Resource Manager"
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe User Group SHARE
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe User Group SHARE
Date: 30 Nov, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
Note "SLAC First VM 3081" button better than 3033. Folklore is 1st
3033 order was VM370 customer and it was going to be great loss of
face for POK, especially since they had only recently convinced
corporate to kill the VM370 product ... and gov. regs required
machines ship in the sequence they were ordered. They couldn't do
anything about the shipping sequence, but they managed to fiddle the
van delivery making a MVS 3033 the first "install" (for publications
and publicity).
After transferring to SJR in the late 70s, I got to wander around lots
of IBM (and other) datacenters in silicon valley ... as well as
attending the monthly BAYBUNCH meetings hosted by SLAC. Early 80s, I
got permission to give presentations on how ECPS was done ... normally
after meetings we adjourn to local watering holes, and the Amdahl
people tell me about being in the process of developing MACROCODE and
HYPERVISOR ... and grill me for more details on ECPS.
I counter with the SLAC/CERN 168E and then 3081E ... enough 370 to run
the fortran programs doing initial data reduction along the
accelerator line.
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
And SLAC had the first webserver in the US on their VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
I also got to wander around disk bldg 14/engineering and
15/product-test across the street; they were running 7x24,
prescheduled, stand-alone mainframe testing. They mentioned they had
recently tried MVS but it had 15min MTBF (requiring manual re-ipl) in
that environment. I offer to rewrite the I/O supervisor to make it
bullet proof and never fail, allowing any amount of concurrent,
on-demand testing, greatly improving productivity (downside was they
kept calling me to spend increasing amounts of time playing disk
engineer). Bldg15 would get early engineering mainframes for disk I/O
test and got the first 3033 outside POK engineering; I/O testing took
only a percent or two of the 3033 CPU, so we scrounge up a 3830
controller and 3330 string and set up our own online service.
The air bearing simulation was being run on the SJR 370/195 (part of
thin film disk head design, first shipped with the 3370FBA), but only
getting a few turn-arounds/month. We set it up on the bldg15 3033
(only about half the 195's MIPS) and it was getting several
turn-arounds/day. We also ran a 3270 coax underneath the street to my
office in bldg28.
They then get an engineering 4341, and the branch office finds out
about it and cons me into doing a (CDC6600 fortran) benchmark on it
for a national lab looking at getting 70 of them for a compute farm
(sort of the leading edge of the coming cluster supercomputing
tsunami).
posts about getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
some recent posts mentioning 4341 benchmark
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
--
virtualization experience starting Jan1968, online at home since Mar1970
Applications That Survive
From: Lynn Wheeler <lynn@garlic.com>
Subject: Applications That Survive
Date: 01 Dec, 2024
Blog: Facebook
Undergraduate in the 60s, I took a two credit hr intro to
fortran/computers and at the end of the semester was hired to rewrite
1401 MPIO in 360 assembler ... univ was getting a 360/67 (for tss/360)
replacing 709/1401 and, pending the 360/67, temporarily got a 360/30
replacing the 1401 (the 30 had 1401 emulation, so I was just part of
an exercise in getting to know 360; I was given a pile of hardware &
software manuals and got to design & implement my own monitor, device
drivers, interrupt handlers, error recovery, storage management, etc;
within a few weeks I had a 2000 card assembler program). Univ.
datacenter shutdown for the weekend and I got the whole place to
myself, although 48hrs w/o sleep made monday classes hard. Within a
year of taking the intro class, the 360/67 came in and I was hired
fulltime responsible for os/360 (tss/360 never came to production, so
it ran as a 360/65) and I continued to have my weekend dedicated
48hrs.
Univ. had a 407 plug-board (admin financial) application that had been
redone in 709 cobol simulating the 407 plug-board, including printing
the 407 sense switch values at the end ... that was ported to os/360.
One day the program ended with different sense switch values. They
stopped all production work while looking for somebody that knew what
it meant; after a couple of hrs (shutdown) they decided to run it
again and see what happened.
other trivia: the 709 (tape->tape) ran student fortran in under a
second; it initially ran over a minute on the 360/67 (as 360/65)
os/360. I install HASP, cutting the time in half. I then start redoing
STAGE2 SYSGEN to place datasets and PDS members to optimize arm seek
and multi-track search, cutting another 2/3rds to 12.9secs; student
fortran never got better than the 709 until I install Univ. of
Waterloo WATFOR. Before I graduate, I'm hired fulltime into a small
group in the Boeing CFO office to help with the formation of Boeing
Computer Services (consolidating all dataprocessing into an
independent business unit). I think the Renton datacenter was the
largest in the world, 360/65s arriving faster than they could be
installed, boxes constantly staged in the hallway around the machine
room. Then as part of disaster planning, they decide to replicate
Renton at the new 747 plant up in Everett ... another large number of
360/65s (somebody joked that Boeing was getting 360/65s like other
companies got keypunches).
Early 80s, I'm introduced to John Boyd and would sponsor his briefings
at IBM. He had lots of stories, including about being vocal that the
electronics across the trail wouldn't work, and possibly as punishment
he is put in command of "spook base" (about the same time I'm at
Boeing). His biographies have spook base a $2.5B "windfall" for IBM
(ten times Renton); Boyd would comment that the "spook base"
datacenter had the largest air conditioned bldg in that part of the
world ... refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
misc. note: the Marine Corps Commandant 89/90 leverages Boyd for a
make-over of the Corps ... a time when IBM was desperately in need of
a make-over; a couple years later IBM has one of the largest losses in
the history of US companies and was being re-orged into the 13 "baby
blues" (take-off on the AT&T "baby bells" and breakup a decade
earlier) preparing for breakup.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup. longer winded account
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
Boyd posts
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
recent posts mentioning univ 709/1401, 360/67, WATFOR, Boeing CFO, and
Renton
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
--
virtualization experience starting Jan1968, online at home since Mar1970
We all made IBM 'Great'
From: Lynn Wheeler <lynn@garlic.com>
Subject: We all made IBM 'Great'
Date: 01 Dec, 2024
Blog: Facebook
... periodically reposted in various threads
A co-worker at the Cambridge Science Center was responsible for the
science center CP67-based wide-area network, which morphs into the IBM
internal network (larger than arpanet/internet from the beginning
until sometime mid/late 80s, about the time the communication group
forced the internal network to convert to SNA/VTAM); the technology
was also used for the corporate-sponsored Univ BITNET ("EARN" in
Europe) ... account from one of the inventors of GML (at CSC, 1969)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Edson passed Aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
Early 80s, I got the HSDT project, T1 and faster computer links (both
satellite and terrestrial), and some amount of battles with the
communication group. In the 60s, IBM had the 2701 telecommunication
controller that supported T1 links. Mid-70s IBM moves to SNA/VTAM and
various issues seemed to cap controllers at 56kbit links. Mid-80s I
reported to the same executive as the person responsible for AWP164
(aka APPN) and I periodically needled him about coming over and
working on "real" networking (TCP/IP). When they went for the APPN
product announcement, the communication group vetoed it. The
announcement then was carefully rewritten to not imply any
relationship between APPN and SNA. Sometime later they rewrite history
to claim APPN was part of SNA.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Also mid-80s, the communication group was blocking the mainframe
TCP/IP announce (part of fiercely fighting off client/server and
distributed computing, trying to preserve their dumb terminal
paradigm). When that was reversed, they then claimed that since they
had corporate strategic responsibility for everything that crossed
datacenter walls, it had to be shipped through them. What is delivered
got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then
do RFC1044 support and in some tuning tests at Cray Research between a
Cray and a 4341, got sustained 4341 channel media throughput using
only a modest amount of the 4341 processor (something like a 500 times
improvement in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Late 80s, a senior disk engineer got a talk scheduled at the annual,
world-wide, internal communication group conference, supposedly on
3174 performance, but opens the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. They were seeing data fleeing mainframe datacenters to
more distributed computing friendly platforms, with a drop in disk
sales. They had come up with several solutions that were all vetoed by
the communication group. The communication group mainframe datacenter
stranglehold wasn't just disks, and a couple years later IBM has one
of the largest losses in the history of US companies and was being
re-orged into the 13 "baby blues" (take-off on the AT&T "baby bells"
and breakup a decade earlier) in preparation for breaking up the
company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
Communication group protecting dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
a few past posts mentioning AWP164/APPN
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012k.html#68 ESCON
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
--
virtualization experience starting Jan1968, online at home since Mar1970
We all made IBM 'Great'
From: Lynn Wheeler <lynn@garlic.com>
Subject: We all made IBM 'Great'
Date: 02 Dec, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#40 We all made IBM 'Great'
Learson tries (and fails) to block the bureaucrats, careerists and
MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
more recent
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
and after the turn of the century (when the former AMEX president
leaves for Carlyle), IBM becomes a financial engineering company
https://www.garlic.com/~lynn/2024g.html#26 IBM Move From Leased To Sales
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some recent posts mentions becoming financial engineering company
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#108 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
--
virtualization experience starting Jan1968, online at home since Mar1970
Back When Geek Humour Was A New Concept To Me
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Back When Geek Humour Was A New Concept To Me ...
Newsgroups: alt.folklore.computers
Date: Mon, 02 Dec 2024 14:14:31 -1000
I (and others) keynote at NASA/CMU Dependable Computing workshop
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html
When I first transfer out to SJR in the 2nd half of the 70s, I get to
wander around IBM (and other) datacenters, including disk
bldg14/engineering and bldg15/product-test across the street. They
were running 7x24, prescheduled, stand-alone testing and mentioned
that they had recently tried MVS ... but it had 15min MTBF (in that
environment), requiring re-ipl. I offer to rewrite the I/O supervisor
to make it bullet proof and never fail so they could do any amount of
on-demand, concurrent testing, greatly improving productivity
(downside was that they wanted me to increasingly spend time playing
disk engineer). I do an internal research report about "I/O integrity"
and happen to mention the MVS 15min MTBF. I then get a call from the
MVS group; I thought that they wanted help in improving MVS integrity
... but it seems they wanted to get me fired for (internally)
disclosing their problems.
In 1980, IBM STL was bursting at the seams and was moving 300 people
(from the IMS DBMS group, along with their 3270s) to an offsite bldg,
with dataprocessing back at the STL datacenter ... they had tried
"remote" 3270 support and found the human factors unacceptable. I get
con'ed into doing "channel extender" support so they can place
channel-attached 3270 controllers at the offsite bldg, with no
perceptible difference in human factors between offsite and inside STL.
The vendor then tries to get IBM to release my support, but there is a
group in POK that gets it vetoed (they were playing with some serial
stuff and were afraid that if it was in the market, it would make it
difficult to release their stuff). The vendor then replicates my
implementation.
Roll forward to 1986, and the 3090 product administrator tracks me down.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
There was an industry service that collected customer mainframe EREP
(detailed error reporting) data and generated periodic summaries. The
3090 engineers had designed the I/O channels predicting a maximum
aggregate of 4-5 "channel errors" per year across all customer 3090
installations ... but the industry summary reported an aggregate of 20
channel errors for the 3090's first year.
It turned out that for certain types of channel-extender transmission
errors, I had selected simulating "channel error" in order to invoke
channel program retry (in error recovery) ... and the extra 15 had come
from customers running the channel-extender support. I did a little
research (across various kernel software) and found that simulating
IFCC (interface control check) would effectively perform the same kinds
of channel program retry (and got the vendor to change their
implementation from "CC" to "IFCC").
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
some posts mentioning 3090 channel check ("CC") errors
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2019c.html#16 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016f.html#5 More IBM DASD RAS discussion
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2010i.html#2 Processors stall on OLTP workloads about half the time--almost no matter what you do
https://www.garlic.com/~lynn/2009l.html#60 ISPF Counter
https://www.garlic.com/~lynn/2008q.html#33 Startio Question
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2007l.html#7 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007f.html#53 Is computer history taught now?
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
--
virtualization experience starting Jan1968, online at home since Mar1970
Apollo Computer
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Apollo Computer
Newsgroups: alt.folklore.computers
Date: Mon, 02 Dec 2024 14:51:22 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Wasn't PRISM good enough? Was it too expensive, maybe? From what I've
heard, the 88000 family weren't particularly wonderful performance-wise,
which is why hardly anybody made use of them. So, after dragging their
feet over Unix and then X11 support, yet another in a series of
questionable strategic decisions from the company? Which is why, after a
few more years, it ceased to exist altogether.
The POWER/RIOS (six-chip) chipset didn't support multiprocessor cache
coherency, so scale-up was via clusters. The executive we reported to
(when doing HA/CMP)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
then went over to head up the Somerset (i.e. AIM: Apple, IBM, Motorola)
single-chip processor effort ... which I somewhat characterize as
adding Motorola 88k RISC multiprocessor cache coherency ... then you
can have large scalable clusters of multiprocessor systems (rather than
just clusters of single-processor systems); a small sketch after the
excerpt below illustrates the distinction.
https://en.wikipedia.org/wiki/AIM_alliance
https://wiki.preterhuman.net/The_Somerset_Design_Center
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC
architecture based on the IBM POWER architecture. They worked a few
features of the 88000 (such as a compatible bus interface[10]) into
the new PowerPC architecture to offer their customer base some sort of
upgrade path. At that point the 88000 was dumped as soon as possible
... snip ...
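To make the coherency point concrete, a minimal sketch (C11
threads/atomics used purely as a stand-in; nothing here is from the
actual POWER or Somerset designs): on a cache-coherent shared-memory
multiprocessor, one processor's store becomes visible to another's
spinning loads, so flag-style synchronization works; without hardware
coherency (as with the six-chip POWER/RIOS), processors can't share
memory this way, and scale-up has to go through explicit messaging
between single-processor systems.

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int ready = 0;   /* producer->consumer flag */
int payload;            /* ordinary shared data */

int producer(void *arg)
{
    (void)arg;
    payload = 42;       /* plain store to shared memory ... */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return 0;
}

int consumer(void *arg)
{
    (void)arg;
    /* spin: relies on hardware cache coherency for the other
       processor's store to become visible here */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    printf("payload=%d\n", payload);   /* 42 on a coherent SMP */
    return 0;
}

int main(void)
{
    thrd_t p, c;
    thrd_create(&c, consumer, NULL);
    thrd_create(&p, producer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}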
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970