List of Archived Posts
2025 Newsgroup Postings (01/01 - )
- IBM APPN
- IBM APPN
- IBM APPN
- IBM Tape Archive
- Dataprocessing Innovation
- Dataprocessing Innovation
- IBM 37x5
- Dataprocessing Innovation
- IBM OS/360 MFT HASP
- John Boyd and Deming
- IBM 37x5
- what's a segment, 80286 protected mode
IBM APPN
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
For awhile I reported to the same executive as the person
responsible for AWP164 (which turns into APPN for AS/400) ... I would
chide him to come over and work on real networking (TCP/IP) because
the SNA organization would never appreciate him. SNA then vetoed the
original 1986 AS/400 APPN announcement ... in the escalation process,
the announcement letter was carefully rewritten to not imply any
relationship between APPN and SNA.
trivia: back in the 70s period when SNA 1st appeared, my (future) wife
was co-author of AWP39, networking architecture ... they had to
qualify it "Peer-to-Peer Networking Architecture" because SNA had
misused the term "Network" (when it wasn't). She was then con'ed into
going to POK to be responsible for loosely-coupled system architecture
where she did "Peer-Coupled Shared Data Architecture".
She didn't remain long, in part because of little uptake (until much
later for SYSPLEX, except for IMS hot-standby) and in part because of
periodic battles with SNA group trying to force her into using VTAM
for loosely-coupled operation.
Peer-Coupled Shared Data Architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM APPN
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
My wife tells of asking Vern Watts whose permission he was going to
ask to do IMS hot-standby; he says "nobody" ... he would just tell
them when it was all done.
https://www.vcwatts.org/ibm_story.html
The Tale of Vern Watts. The long, inspired career of an IBM
Distinguished Engineer and IMS inventor
SNA organization was fighting off release of mainframe TCP/IP support;
when they lost ... they changed their tactic and said that since they
had corporate responsibility for everything that crossed datacenter
walls, it had to be released through them. What shipped got 44kbytes/sec
aggregate using nearly a whole 3090 processor. It was then released for
MVS by simulating the VM370 diagnose API .... which further aggravated
CPU use (in MVS).
I then added RFC1044 support and in some tuning tests at Cray Research
between a Cray and a VM/4341, got sustained 4341 channel throughput
using only a modest amount of 4341 processor (something like a 500
times improvement in bytes moved per instruction executed).
A univ. did a study comparing MVS VTAM LU6.2 pathlength (160K
instructions and 15 buffer copies) with UNIX TCP pathlength (5k
instructions and five buffer copies). The SNA organization fought me
being on the XTP technical advisory board ... working on HSP
(high-speed protocol) standard that included direct TCP transfers
to/from application space (with no buffer copies), with scatter/gather
(unix "chained data").
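trivia: the scatter/gather ("chained data") idea survives in ordinary
POSIX vectored I/O ... one call hands the kernel several separate
application buffers, avoiding an intermediate staging copy. A minimal
sketch using Python's os.writev wrapper (buffer contents invented for
illustration; this shows only the concept, not the XTP/HSP or VTAM
code discussed above):

```python
import os
import tempfile

# Gather-write: the kernel collects several separate application
# buffers in a single call, so the application never copies them
# into one contiguous buffer first.
header = b"HDR:"
payload = b"payload-bytes"
trailer = b":END"

fd, path = tempfile.mkstemp()
try:
    nbytes = os.writev(fd, [header, payload, trailer])  # one call, three buffers
    os.lseek(fd, 0, os.SEEK_SET)
    result = os.read(fd, nbytes)
finally:
    os.close(fd)
    os.remove(path)

print(result)  # b'HDR:payload-bytes:END'
```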
Later in the 90s, SNA organization hires a silicon valley contractor
(former Amdahl employee that I had known from SHARE meetings since
early 70s, who recently passed) to implement TCP/IP directly in
VTAM. What he demo'ed had TCP/IP significantly higher throughput than
LU6.2. He was then told that everybody knows that a "proper" TCP/IP
implementation is much slower than LU6.2, and they would only be
paying for a "proper" implementation.
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM APPN
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM APPN
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2025.html#1 IBM APPN
Co-worker at the science center was responsible for the CP67-based
wide-area network from the 60s, which morphs into the corporate
internal network (larger than arpanet/internet from the beginning
until sometime mid/late 80s, about the time the SNA-org forced the
internal network to be converted to SNA/VTAM). Account by one of the
inventors of GML at the science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Edson (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
We then transfer out to san jose research in 1977 and in early 80s, I
get HSDT, T1 and faster computer links, some amount of conflict with
SNA-org (note in 60s, IBM had 2701 telecommunication controller that
supported T1 links, however IBM's move to SNA in the mid-70s and
associated issues seem to have capped links at 56kbits/sec). Was
working with the NSF director and was supposed to get $20M to
interconnect the NSF Supercomputer centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released (in
part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed). The NSF
director tried to help by writing the company a letter (3Apr1986, NSF
Director to IBM Chief Scientist and IBM Senior VP and director of
Research, copying IBM CEO) with support from other gov. agencies
... but that just made the internal politics worse (as did claims that
what we already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, it becomes the NSFNET backbone,
precursor to the modern internet.
SJMerc article about Edson, "IBM'S MISSED OPPORTUNITY WITH THE
INTERNET" (gone behind paywall but lives free at wayback machine),
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
trivia: HSDT first long-haul T1 link was between IBM Los Gatos lab (on
west coast) and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, NY (on the east coast) ... where he had a
whole boat load of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Tape Archive
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Tape Archive
Date: 01 Jan, 2025
Blog: Facebook
I had archive tapes of files and email from my time at univ in the 60s
through 1977 at the IBM science center; starting with 800bpi, copied
to 1600bpi, then to 6250bpi and finally to 3480 cartridge ... triple
replicated in the IBM Almaden Research tape library. Mid-80s, Melinda
asked if I had the original implementation of multi-level CMS source
update (done in an exec iterating with temp files and a sequence of
CMS UPDATE commands). I managed to pull it off tape and email it to her.
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908
That was fortunate; shortly later Almaden started experiencing
operational problems, mounting random tapes as scratch ... and I
eventually found I had lost nearly a dozen tapes (including all the
triple-replicated 60s&70s archive).
other Melinda email from the era (by which time I had learned not to
keep all replicated copies in Almaden)
https://www.garlic.com/~lynn/2007b.html#email860111
https://www.garlic.com/~lynn/2011b.html#email860217
https://www.garlic.com/~lynn/2011b.html#email860217b
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning almaden tape library
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024d.html#51 Email Archive
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024b.html#7 IBM Tapes
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018e.html#65 System recovered from Princeton/Melinda backup/archive tapes
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014e.html#28 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2012i.html#22 The Invention of Email
https://www.garlic.com/~lynn/2011g.html#29 Congratulations, where was my invite?
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; Now : Programming practices
--
virtualization experience starting Jan1968, online at home since Mar1970
Dataprocessing Innovation
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 01 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
some CICS history ... website gone 404, but lives on at the wayback
machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
post about taking a 2hr intro to fortran/computers; the univ was
getting a 360/67 (for tss/360) replacing a 709/1401, and within a yr
of taking the intro class, it comes in and I'm hired fulltime
responsible for os/360 (tss/360 didn't come to fruition)
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
more reference in thread
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#110 IBM 370 Virtual Storage
Univ. library gets ONR grant to do an online catalog, some of the
money goes for a 2321 datacell ... the effort was also selected as
betatest for the CICS product, and CICS was added to my tasks. One of
the 1st problems was CICS wouldn't come up ... eventually tracked it
to CICS having some hard-coded BDAM options/features ... that weren't
documented/specified; the library had created BDAM files with a
different set of options.
posts mentioning CICS and/or BDAM
https://www.garlic.com/~lynn/submain.html#cics
--
virtualization experience starting Jan1968, online at home since Mar1970
Dataprocessing Innovation
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 02 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation
some related CSC and virtual machine work ...
Some of the MIT CTSS/7094 people go to the 5th flr and Multics and
others go to the 4th flr and the IBM Cambridge Scientific Center. CSC
was expecting MULTICS to be awarded to IBM (CSC), but instead it goes
to GE.
Melinda Varian's history
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/neuvm.pdf
from above, Les Comeau has written (about TSS/360):
Since the early time-sharing experiments used base and limit registers
for relocation, they had to roll in and roll out entire programs when
switching users....Virtual memory, with its paging technique, was
expected to reduce significantly the time spent waiting for an
exchange of user programs.
What was most significant was that the commitment to virtual memory
was backed with no successful experience. A system of that period that
had implemented virtual memory was the Ferranti Atlas computer, and
that was known not to be working well. What was frightening is that
nobody who was setting this virtual memory direction at IBM knew why
Atlas didn't work.
... snip ...
A motivation for CSC to do virtual memory hardware mods to 360/40 and
(virtual machine/memory) CP40/CMS, was to study virtual memory
operation.
Atlas reference (gone 403?, but lives free at wayback):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:
Paging can be credited to the designers of the ATLAS computer, who
employed an associative memory for the address mapping [Kilburn, et
al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words
per page), |p| = 11 (resulting in 2024 pages), and f = 5 (resulting in
32 page frames). Thus a 2^20-word virtual memory was provided for a
2^14-word machine. But the original ATLAS operating system employed
paging solely as a means of implementing a large virtual memory;
multiprogramming of user processes was not attempted initially, and
thus no process id's had to be recorded in the associative memory. The
search for a match was performed only on the page number p.
... snip ...
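The quoted Atlas parameters can be checked arithmetically (note that
2^11 is 2048, so the quote's "2024 pages" looks like a transcription
slip); a quick sketch:

```python
# Atlas address-mapping arithmetic from the quoted parameters:
# |w| = 9 bits of word-within-page, |p| = 11 bits of virtual page
# number, f = 5 bits of real frame number.
w, p, f = 9, 11, 5

words_per_page = 2 ** w                 # 512 words per page
virtual_pages = 2 ** p                  # 2048 virtual pages
page_frames = 2 ** f                    # 32 real page frames

virtual_words = words_per_page * virtual_pages  # 2^20-word virtual memory
real_words = words_per_page * page_frames       # 2^14-word real machine

print(words_per_page, virtual_pages, page_frames)  # 512 2048 32
print(virtual_words == 2 ** 20, real_words == 2 ** 14)  # True True
```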
... referencing ATLAS used paging for large virtual memory ... but not
multiprogramming (multiple concurrent address spaces). Cambridge had
modified 360/40 with virtual memory and associative lookup that
included both process-id and page number (aka both virtual memory and
multiple concurrent processes).
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
IBM does the 360/67 standard with virtual memory for TSS/360. When the
360/67 becomes available, CSC morphs CP40/CMS into CP67/CMS. At the
time TSS/360 was decommitted, there were 1200 people involved with
TSS/360 and 12 people in the CP67/CMS group.
As an undergraduate in the 60s, I had been hired fulltime for OS/360
running on a 360/67 (run as a 360/65; it originally was supposed to be
for TSS/360). The univ shut down the datacenter on weekends and I
would have it dedicated (although 48hrs w/o sleep made Monday classes
difficult). CSC then came out to install CP/67 (3rd after CSC itself
and MIT Lincoln Labs) and I mostly played with it during my dedicated
time ... spent the 1st six months or so redoing pathlengths for
running OS/360 in virtual machine. OS/360 benchmark was 322secs on
bare machine, initially 856secs in virtual machine (CP67 CPU 534secs),
got CP67 CPU down to 113secs (from 534secs).
I redid (dynamic adaptive resource management) scheduling&paging
algorithms and added ordered seek for disk i/o and chained page
requests to maximize transfers/revolution (taking the 2301 fixed-head
drum from peak 70/sec to peak 270/sec). I changed CP67 page
replacement to global LRU (at a time when academic literature was all
about "local LRU"), which I also deployed at Cambridge after
graduating and joining IBM. IBM Grenoble Scientific Center modified
CP67 to implement a "local LRU" algorithm for their 1mbyte 360/67 (155
page'able pages after fixed memory requirements). Grenoble had a very
similar workload to Cambridge, but their throughput for 35 users
(local LRU) was about the same as Cambridge's 768kbyte 360/67 (104
page'able pages) with 80 users (and global LRU) ... aka global LRU
outperformed "local LRU" with more than twice the number of users and
only 2/3rds the available real memory.
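The global-vs-local policy difference can be illustrated with a toy
simulation ... only a sketch of the idea (invented reference strings,
static equal partition standing in for "local"), not the CP67 or
Grenoble implementations:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for one reference string under plain LRU."""
    mem = OrderedDict()
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)          # touched: now most recent
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)    # evict least recently used
            mem[page] = True
    return faults

def global_lru(workloads, total_frames):
    """All processes compete for one LRU-ordered pool of frames."""
    # interleave per-process reference strings round-robin, tagging
    # pages with a process id (separate address spaces)
    refs = []
    for i in range(max(map(len, workloads))):
        for pid, w in enumerate(workloads):
            if i < len(w):
                refs.append((pid, w[i]))
    return lru_faults(refs, total_frames)

def local_lru(workloads, total_frames):
    """Frames statically partitioned; each process runs its own LRU."""
    share = total_frames // len(workloads)
    return sum(lru_faults([(pid, p) for p in w], share)
               for pid, w in enumerate(workloads))

# two toy processes: one with a small working set, one looping over a
# larger set ... under global LRU the small one gives up unused frames
a = [1, 2, 1, 2, 1, 2, 1, 2]
b = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
print(global_lru([a, b], 8), local_lru([a, b], 8))  # 7 12
```

With 8 frames total, the global policy takes only the 7 compulsory
faults, while the static 4/4 split makes the looping process fault on
every reference.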
other trivia: there were a couple of CP67 online commercial spinoffs
in the 60s (that specialized in services for the financial industry)
and one of the Multics people from the 5th flr joins one as part of a
"First Financial Language" offering on CP67/CMS (and later
VM370/CMS). A decade later he joins with another person to found
VisiCalc:
https://en.wikipedia.org/wiki/VisiCalc
GML was invented at CSC in 1969 and a decade later morphs into SGML
and after another decade morphs into HTML at CERN and the first
webserver in the US is on the Stanford SLAC VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
Account by one of the GML inventors about the CP67-based wide-area
network, which later morphs into the corporate internal network
(larger than arpanet/internet from the start until sometime mid/late
80s, about the time the internal network was forced to convert to
SNA/VTAM).
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Technology also used for the corporate sponsored univ. BITNET:
https://en.wikipedia.org/wiki/BITNET
Person responsible (passes Aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about Edson's battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Previously mentioned in this post/thread, nearly the whole first
decade of SQL/relational System/R work was all done on VM370.
The first commercial relational (but not SQL) RDBMS was by the MULTICS
group on the 5th flr (followed by Oracle, Ingres, etc); in part, the
IBM System/R group faced enormous hurdles inside IBM. Was eventually
able to do tech transfer ("under the radar" while the company was
focused on the IMS-followon "EAGLE") to Endicott for SQL/DS. When
"EAGLE" implodes there is a request for how fast System/R could be
ported to MVS ... eventually released as DB2 (initially for
decision-support *ONLY*)
and finally ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 at the Naval
Postgraduate School
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, XML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
Internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online commercial (virtual machine) offerings
https://www.garlic.com/~lynn/submain.html#online
first sql/relational RDBMS, System/R
https://www.garlic.com/~lynn/submain.html#systemr
virtual memory and paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 37x5
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5
Date: 02 Jan, 2025
Blog: Facebook
When I joined the IBM science center ... the person responsible for
the science centers' CP67-based wide-area network had been trying to
convince CPD that they should use the much more capable (Series/1)
Peachtree processor (rather than the really anemic UC) in the 37xx
boxes. Reference by one of the inventors of GML at the science center
in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
The CP67-based wide-area network then morphs into the corporate
internal network, larger than the arpanet/internet from the start
until sometime mid/late 80s (about the time it was forced to convert
to SNA/VTAM) ... technology had also been used for the corporate
sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet
https://www.garlic.com/~lynn/subnetwork.html#bitnet
We transfer from CSC out to San Jose Research on the west coast in
1977 ... and in the early 80s, I get HSDT project, T1 and faster
computer links (note in the 60s, IBM had 2701 telecommunication
controllers that supported T1, but in the transition to SNA/VTAM and
37xx boxes in the mid-70s, issues seemed to cap links at
56kbits/sec). Part of HSDT funding was based on being able to show
some IBM content, and I eventually found the FSD Series/1 T1 Zirpel
card that had been done for government customers that still had 2701
controllers (which were all in the process of falling apart). I then
went to order a half dozen S/1s and was told that IBM had recently
bought ROLM (which was a Data General shop) and ROLM had made a large
S/1 order that created a year's backlog (I eventually cut a deal with
ROLM for some of their order positions; I would help them with some of
their testing operations).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
trivia: As undergraduate, I had been hired fulltime responsible for
OS/360 (on a 360/67 run as a 360/65). Had 2741 & tty/ascii terminal
support with a hack for dynamically determining terminal type (using
SAD CCW to change the line port scanner type). I then wanted a single
dial-in number ("hunt group"), but IBM had taken a short-cut and
hardwired line speed. That starts a univ project to build our own
clone controller: built a channel interface board for an Interdata/3
programmed to emulate an IBM controller, with the addition that it
could do automatic line baud rate; then upgraded to an Interdata/4 for
the channel interface and clusters of Interdata/3s for port interfaces
(Interdata, and later Perkin-Elmer, sells it as a clone controller,
and four of us get written up as responsible for some part of the
clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
Then before I graduate, I'm hired into a small group in the Boeing
CFO office to help with the formation of Boeing Computer Services
(consolidating all dataprocessing into an independent business
unit). I think the Renton datacenter was the largest in the world,
with 360/65s arriving faster than they could be installed ... some
joked that Boeing was installing 360/65s like other companies
installed keypunches.
When I graduate, I join IBM CSC (instead of staying with Boeing
CFO). One of my hobbies was enhanced production operating systems, and
the online sales&marketing support HONE systems were the first (and
long-time) customer. With the announce of virtual memory for all 370s,
the decision was made to do CP67->VM370 morph ... which simplified
and/or dropped a lot of CP67 features. In 1974, I start moving a lot
of CP67 features to a VM370R2-base for my CSC/VM .... about the same
time all the US HONE datacenters were consolidated in 1501 California
(reconfigured into largest IBM single-system-image, loosely-coupled,
shared DASD operation with load-balancing and fall-over across the
complex). I then put SMP, tightly-coupled, multiprocessor support into
VM370R3-based CSC/VM, originally for HONE so they could add a 2nd
processor to each of their systems. After transferring to SJR, I could easily
commute up to HONE a few times a month. As an aside, when FACEBOOK 1st
moved to silicon valley, it was into new bldg built next door to the
former US HONE complex.
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Mid-80s, I was also con'ed into a project to take a VTAM/NCP emulator
that one of the baby bells had done on a Series/1 and turn it into a
type-1 product. Part of the process was using the HONE 3725
"configurator" to size a 3725 operation compared to their live S/1
operation, which I presented at a Raleigh SNA ARB meeting ... parts of
that presentation:
https://www.garlic.com/~lynn/99.html#67
and part of baby bell presentation at IBM COMMON user group conference:
https://www.garlic.com/~lynn/99.html#70
Raleigh would constantly claim that the comparison was invalid but
was never able to explain why. A significant amount of effort went
into walling the project off from Raleigh influence in corporate
politics, but what was done next to kill the project can only be
described as truth is stranger than fiction.
--
virtualization experience starting Jan1968, online at home since Mar1970
Dataprocessing Innovation
From: Lynn Wheeler <lynn@garlic.com>
Subject: Dataprocessing Innovation
Date: 03 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024g.html#111 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#112 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#4 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
Jim was leaving for Tandem in fall of 1980 and wanted to palm off some
of the System/R stuff on me. One was the Bank of America System/R
study ... which was getting 60 VM/4341s to put out in branches running
System/R.
This was part of the leading edge of the coming distributed computing
tsunami ... large corporations were ordering hundreds of vm/4341s at a
time for putting out in departmental areas (inside IBM, conference
rooms were starting to be in short supply, having been converted to
vm/4341 rooms).
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
some recent posts mentioning coming distributed computing tsunami
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#60 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#95 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#64 Distributed Computing VM4341/FBA3370
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#29 Wondering Why DEC Is The Most Popular
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#51 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM OS/360 MFT HASP
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/360 MFT HASP
Date: 04 Jan, 2025
Blog: Facebook
SHARE history of HASP & JES2 (gone 404, but lives on at wayback
machine)
https://web.archive.org/web/20041026022852/http://www.redbug.org/dba/sharerpt/share79/o441.html
Meanwhile, the operating system weenies kept spinning their own
wheels, and so eventually MFT-II and MVT were released. These were
able to do multi-jobbing. Because such a large percentage of the
OS/MFT community was already dependent on HASP, the staff in
Washington adapted HASP to the new OS releases and introduced HASP-II
version 1.
... snip ...
Other trivia: HASP NJE (later morphing into JES2 NJE) originally had
"TUCC" in cols. 68-71 of the assembler source code ... part of the
issue was it used free entries in the 255-entry pseudo device table
for network node definition ... typically 160-180 entries ... when the
internal (RSCS/VNET) network had long before passed 255 nodes. A NJE
emulation driver was done for RSCS/VNET allowing MVS nodes to be
connected, but they tended to be restricted to boundary nodes (behind
RSCS/VNET) since NJE would discard traffic when either the origin or
destination node wasn't in the local table. While RSCS/VNET had a well
structured implementation, NJE had somewhat intermixed networking and
job control fields, and NJE traffic from MVS/JES2 had a habit of
crashing a destination MVS/JES2 at a different release level. As a
result a large library of code appeared for RSCS/VNET NJE emulation
that could transpose fields in traffic for a directly connected
MVS/JES2. There was an infamous case of San Jose origin MVS/JES2 traffic
crashing Hursley destination MVS/JES2 systems ... and it was blamed
on the Hursley RSCS/VNET (because its NJE emulator hadn't been updated
for the latest San Jose JES2 changes).
I was undergraduate in 60s and had taken a two credit intro to
fortran/computers. The univ. was getting 360/67 for tss/360, replacing
709/1401. The 360/67 arrived within a year of my taking the intro
class and I was hired fulltime responsible for os/360 (tss/360 never
came to fruition so it ran as 360/65). Student Fortran had run under a
second on 709 (tape->tape, 1401 unit record front-end), but over a
minute on 360/65 OS/360 MFTR9.5 (before first MVTR12). I install HASP
and it cuts the time in half. Then I start redoing STAGE2 SYSGEN, to be
able to run in production HASP (instead of starter system) and
carefully place datasets and PDS members to optimize seeks and
multi-track searches, cutting another 2/3rds to 12.9secs. Student
fortran was never better than 709 until I install Univ. of Waterloo
WATFOR.
While the MVT option was available, the next release sysgen did MFTR14.
I didn't do an MVT SYSGEN until the combined MVTR15/16 ... which also
included being able to specify the cylinder location of VTOC (to reduce
avg. arm seek).
CSC came out Jan1968 to install CP67 (3rd installation after CSC itself
and MIT Lincoln Labs) and I mostly got to play with it during my
dedicated weekend time (univ shutdown datacenter over the weekend and I
had the place dedicated, although 48hrs w/o sleep made Monday classes
hard). I initially rewrite a lot of pathlengths for running OS/360 in
virtual machine. My OS/360 test stream ran 322secs on real machine but
initially 856secs virtually (CP67 CPU 534secs); within a couple months
had CP67 CPU down to 113secs. I was invited to the Mar1968 SHARE meeting
for the IBM CP67 announce (and also participated in 1st SHARE HASP
project).
Related history: early last decade I had been asked to track down the
decision to add virtual memory to all 370s and found a staff member to
the executive making the decision; pieces of the email exchange
(including some HASP/SPOOL history) in this archived post
https://www.garlic.com/~lynn/2011d.html#73
For MVTR18, I remove the 2780 RJE support from HASP (to reduce real
storage) and put in 2741 and TTY/ASCII terminal support and an editor
supporting CP67/CMS EDIT syntax (totally different code since the
program environments were so different) for CRJE.
HASP/ASP, JES2/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
other trivia: posts mentioning Boeing Huntsville modify MVTR13 with
virtual memory support
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
--
virtualization experience starting Jan1968, online at home since Mar1970
John Boyd and Deming
From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and Deming
Date: 05 Jan, 2025
Blog: LinkedIn
re:
https://www.garlic.com/~lynn/2024g.html#64 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#74 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#103 John Boyd and Deming
80s, when foreign auto manufacturers were setting up factories in the
US heartland, they found that they had to require a JR college degree in
order to get workers with a high school level education. The finding was
possibly some of the motivation for states to start requiring
proficiency tests for a high school diploma ... although there was
press coverage of state legislature battles over whether requiring 7th
grade level math&reading proficiency was too high a qualification for a
high school diploma.
c4 task force posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
past posts mention literacy/competency
https://www.garlic.com/~lynn/2017h.html#17 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2017d.html#27 US Education
https://www.garlic.com/~lynn/2012j.html#39 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2009f.html#47 TARP Disbursements Through April 10th
https://www.garlic.com/~lynn/2008k.html#5 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2007u.html#80 Education ranking
https://www.garlic.com/~lynn/2007k.html#30 IBM Unionization
https://www.garlic.com/~lynn/2003j.html#28 Offshore IT
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 37x5
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 37x5
Date: 06 Jan, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
not assigned to Endicott ... cambridge had a joint distributed project
w/endicott to add 370 virtual memory support to cp67 (for emulated 370
virtual machines) ... and then modifications so CP67 ran on 370
hardware (in regular use a year before the 1st engineering 370/145 with
virtual memory was operational ... and was used as test case for the
machine).
also after FS implodes was asked to help with ECPS microcode assist
for 138/148 (also used later for 4300). selected the 6kbytes of highest
executed VM370 kernel pathlengths for rewriting in microcode (for 10:1
speedup), archived post with initial analysis (6kbytes of kernel
instructions accounting for 79.55% of kernel execution time):
https://www.garlic.com/~lynn/94.html#21
Then got conned into running around helping present the 138/148 business
case to planners in US regions and World Trade countries.
Then tried to convince corporate to allow VM370 preinstall on every
138/148 (vetoed in part because the head of POK had recently convinced
corporate to kill the vm370 product, shutdown the development group and
transfer all the people to POK for MVS/XA ... Endicott did manage to
save the VM370 product mission for the midrange ... but had to recreate
a VM370 development group from scratch)
one of the things learned going around presenting the 138/148 business
case ... was that in WT, country forecasts turned into plant orders
... and deliveries were made to countries to sell (and held accountable)
... while US regional forecasts tended to conform to corporate strategic
positions ... and any problems fell back on the plants (as a result,
plants tended to redo US regional forecasts since they could have little
to do with actual expected sales).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning CP67L (runs on real 360/67), CP67H (run in 360/67
virtual machine, added emulated 370 virtual machine), CP67I (runs on
370 machine), CP67SJ (CP67I with 3330 & 2305 device drivers)
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
--
virtualization experience starting Jan1968, online at home since Mar1970
what's a segment, 80286 protected mode
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: what's a segment, 80286 protected mode
Newsgroups: comp.arch
Date: Mon, 06 Jan 2025 17:28:11 -1000
John Levine <johnl@taugh.com> writes:
What you're describing is multi-level page tables. Every virtual
memory system has them. Sometimes the operating systems make the
higher level tables visible to applications, sometimes they don't. For
example, in IBM mainframes the second level page table entries, which
they call segments, can be shared between applications.
the initial adding of virtual memory to all IBM 370s was similar to the
24bit 360/67 but had options for 16 1mbyte segments or 256 64kbyte
segments and either 4kbyte or 2kbyte pages. The initial mapping of 360
MVT to VS2/SVS was a single 16mbyte address space ... very similar to
running MVT in a CP/67 16mbyte virtual machine.
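A minimal sketch of the address-space geometry options just described
(the sizes are from the text above; the function names are mine, purely
for illustration):

```python
# 370 24-bit virtual addressing: 16MB address space, with a choice of
# segment size (1MB or 64KB) and page size (4KB or 2KB).
ADDRESS_SPACE = 16 * 1024 * 1024  # 24-bit, 16mbytes

def geometry(seg_size, page_size):
    """Return (segment table entries, page table entries per segment)."""
    segments = ADDRESS_SPACE // seg_size
    pages_per_seg = seg_size // page_size
    return segments, pages_per_seg

# 16 x 1mbyte segments, 4kbyte pages
print(geometry(1024 * 1024, 4096))   # (16, 256)
# 256 x 64kbyte segments, 2kbyte pages
print(geometry(64 * 1024, 2048))     # (256, 32)
```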
The upgrade to VS2/MVS gave each region its own 16mbyte virtual
address space. However, the OS/360 MVT API heritage was a
pointer-passing API ... so they mapped a common 8mbyte image of the
"MVS" kernel into every 16mbyte virtual address space (leaving 8mbytes
for application code), so kernel API call code could still directly
access user API parameters (basically the same code from MVT days).
However, MVT subsystems were also moved into their own separate 16mbyte
virtual address spaces ... making it harder to access application API
calling parameters. So they defined a common segment area (CSA), a
1mbyte segment mapped into every 16mbyte virtual address space;
application code would get space in the CSA for API parameter
information when calling a subsystem.
The problem was that the requirement for subsystem API parameter (CSA)
space was proportional to the number of concurrent applications plus the
number of subsystems and quickly exceeded 1mbyte ... and it morphed into
the multi-megabyte common system area. By the end of the 70s, CSAs were
running 5-6mbytes (leaving 2-3mbytes for programs) and threatening to
become 8mbytes (leaving zero mbytes for programs) ... part of the mad
rush to XA/370 and 31-bit virtual addressing (as well as access
registers and multiple concurrent virtual address spaces ... the
"Program Call" instruction had a table of MVS/XA address space pointers
for subsystems; the PC instruction would move the caller's address space
pointer to secondary and load the subsystem address space pointer into
primary ... the program return instruction reversed the process, moving
the secondary pointer back to primary).
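The squeeze described above can be sketched with simple arithmetic
(hypothetical illustration; the 8mbyte kernel image and the CSA sizes
are the figures given in the text):

```python
# With an 8MB kernel image mapped into every 16MB address space, the
# space left for application code shrinks as the "common segment" grows
# into a multi-megabyte common system area.
TOTAL_MB, KERNEL_MB = 16, 8

def app_space(csa_mb):
    """MB left for application code given a CSA of csa_mb megabytes."""
    return TOTAL_MB - KERNEL_MB - csa_mb

for csa_mb in (1, 5, 6, 8):
    print(f"CSA {csa_mb}MB -> {app_space(csa_mb)}MB for application code")
# CSA 1MB -> 7MB, 5MB -> 3MB, 6MB -> 2MB, 8MB -> 0MB for application code
```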
some recent posts mentioning the explosion from "common segment" to
"common system" CSA, xa/370, access registers:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
--
virtualization experience starting Jan1968, online at home since Mar1970
--
previous, index - home