List of Archived Posts

2006 Newsgroup Postings (10/23 - 11/02)

Microsoft to design its own CPUs - Next Xbox In Development
Is the teaching of non-reentrant HLASM coding practices ever
Is the teaching of non-reentrant HLASM coding practices ever defensible?
Universal constants
Turbo C 1.5 (1987)
Are there more stupid people in IT than there used to be?
Ranking of non-IBM mainframe builders?
32 or even 64 registers for x86-64?
Root CA CRLs
32 or even 64 registers for x86-64?
Why these original FORTRAN quirks?
Ranking of non-IBM mainframe builders?
Ranking of non-IBM mainframe builders?
VM SPOOL question
32 or even 64 registers for x86-64?
more than 16mbyte support for 370
Is the teaching of non-reentrant HLASM coding practices ever defensible?
old Gold/UTS reference
Why magnetic drums was/are worse than disks ?
old vm370 mitre benchmark
Why these original FORTRAN quirks?; Now : Programming practices
Are there more stupid people in IT than there used to be?
threads versus task
threads versus task
CMSBACK
Are there more stupid people in IT than there used to be?
Universal constants
The Future of CPUs: What's After Multi-Core?
Why these original FORTRAN quirks?
Storage Philosophy Question
Why these original FORTRAN quirks?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
threads versus task
The Future of CPUs: What's After Multi-Core?
Universal constants
The Future of CPUs: What's After Multi-Core?
Are there more stupid people in IT than there used to be?
Vulnerability Assessment of a EAL 4 system
Why these original FORTRAN quirks?
Encryption and authentication
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?
1960s railroad data processing on L&N
To RISC or not to RISC
To RISC or not to RISC
To RISC or not to RISC
To RISC or not to RISC
The Future of CPUs: What's After Multi-Core?
The Future of CPUs: What's After Multi-Core?

Microsoft to design its own CPUs - Next Xbox In Development

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft to design its own CPUs - Next Xbox In Development
Newsgroups: comp.arch
Date: Mon, 23 Oct 2006 09:48:03 -0600
torbenm@app-3.diku.dk (Torben Ægidius Mogensen) writes:
This is more a software issue than a hardware issue, except to the extent that current systems require such tortuous methods to exploit that you can't abstract these into libraries or compilers.

For a next-generation games machine, I think the best idea is to codesign hardware and software, i.e., make a language that exploits the hardware at the same time as you make the hardware, and make both be aware that the other exists, so you don't make it impossible to compile to the hardware nor make the language impossible to implement efficiently.


aka it is a system issue ... and the semantics of the different pieces. right now there are assertions that the semantics of various existing hardware features are sufficiently complex and/or obtuse that it is nearly impossible to develop software that achieves the desired performance and real-time presentation.

some of this is matching the semantics and the overall infrastructure that needs to concurrently manage a large number of different processes/operations in a highly parallelized environment ... in several cases, extremely fine-grain parallel, concurrent operations. this has something of the flavor of the oldtime high performance processors that needed highly tuned and highly specialized horizontal microcoding (and the execution/work units may span a number of different chips).

re:
https://www.garlic.com/~lynn/2006s.html#62 Microsoft to design its own CPUs - Next Xbox In Development
https://www.garlic.com/~lynn/2006s.html#63 Microsoft to design its own CPUs - Next Xbox In Development

Is the teaching of non-reentrant HLASM coding practices ever

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l,alt.folklore.computers
Date: Mon, 23 Oct 2006 11:27:34 -0600
gilmap writes:
Is it necessary to round to a page boundary, or only to a cache line boundary? (Either is subject to architectural change.)

what is the objective? ... things like cache line thrashing across different processors ... or use of page-oriented protection mechanisms (of course it is usually transparent if program loading rounds up to a larger storage aligned increment).
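
a minimal sketch of the round-up-to-boundary arithmetic mentioned above (my addition, assuming power-of-two boundaries; the specific sizes are illustrative):

```python
def round_up(size, boundary):
    """round size up to the next multiple of boundary (must be a power of two)"""
    return (size + boundary - 1) & ~(boundary - 1)

print(round_up(5000, 4096))   # page rounding
print(round_up(5000, 128))    # cache-line rounding
```

whether the loader rounds to a cache line or a full page is exactly the architectural choice being discussed ... the arithmetic is the same either way.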

it used to be that the storage key based protection mechanism operated on 2k granularity ... but with 3081 and 370-xa ... 2k storage keys were dropped in favor of 4k storage keys.

and as mentioned ... cms had shared pages in cp67 ... which was oriented around r/o protection on a page basis (using a hack involving 2k storage key protection). in the morph of cms for vm370, some amount of cms was reworked to align with the segment sharing (across address spaces) and segment protection facility in the original 370 virtual memory architecture. the segment protection (and other) features of the original 370 virtual memory architecture got dropped as part of buying six months schedule time for the 370/165 hardware implementation (and while the cms programming model retained the shared-segment alignment orientation ... the underlying protection mechanism used by vm370 reverted to the storage key based protection mechanism used by cp67).

and then, of course, there is the whole os/360 heritage of relocatable adcons ... not only do all the relocatable adcons need to be swizzled as part of program loading ... but it is further aggravated by relocatable adcons being intermixed with program code and data (as an aside, cms leverages the os/360 derived assemblers, compilers, conventions, etc ... however, tss/360 had a much better convention for the handling of program address constants in a virtual memory environment).
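
the adcon swizzling can be sketched in miniature ... the loader walks a relocation list (here, invented byte offsets of 4-byte address constants in a toy program image) and adds the load address to each. everything here is illustrative, not the actual os/360 loader logic:

```python
import struct

def relocate(image: bytearray, adcon_offsets, load_addr):
    """add load_addr to each 4-byte big-endian address constant in image"""
    for off in adcon_offsets:
        (val,) = struct.unpack_from(">I", image, off)
        struct.pack_into(">I", image, off, val + load_addr)

# toy program image with two adcons at offsets 0 and 8, link-edited at origin 0
image = bytearray(12)
struct.pack_into(">I", image, 0, 0x100)
struct.pack_into(">I", image, 8, 0x200)
relocate(image, [0, 8], 0x20000)
print(hex(struct.unpack_from(">I", image, 0)[0]))
```

the pain point in the text is that real images intermix these constants with code and data, so every load (and any attempt at shared, r/o program pages) has to touch and rewrite them.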

various recent postings mentioning the relocatable adcon issue
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of the swallaw (was Real core)
https://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?

and lots of past posts mentioning relocatable adcon issue
https://www.garlic.com/~lynn/submain.html#adcon

Is the teaching of non-reentrant HLASM coding practices ever defensible?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 23 Oct 2006 13:34:09 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for some trivia ... one of the people in the following meeting claimed to have been the primary person handling the sql/ds technology transfer from endicott back to stl for db2
https://www.garlic.com/~lynn/95.html#13


re:
https://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?

the meeting was primarily to focus on cluster scale-up for our (ibm) ha/cmp product (and possible relationship between cluster scale-up and parallel oracle)
https://www.garlic.com/~lynn/subtopic.html#hacmp

for completely other trivia ... two of the other people mentioned in the same meeting later showed up in a small client/server startup, responsible for something they called the "commerce server". at that time, we were called in as consultants on something called a payment gateway ... because they wanted to process financial transactions on servers
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

the small client/server startup had this technology called https (or ssl) and the payment stuff is frequently now referred to as electronic commerce.

a couple other recent posts on some (even older) networking activity
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#51 Ranking of non-IBM mainframe builders?

and then there are these two
https://www.garlic.com/~lynn/internet.htm#0
https://www.garlic.com/~lynn/internet.htm#31

Universal constants

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Universal constants
Newsgroups: alt.folklore.computers
Date: Mon, 23 Oct 2006 18:11:13 -0600
for completely other drift .... a 30 min. program on Legends of Airpower just started on the Military (cable) channel ... this one on John Boyd.

misc. Boyd URLs from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

and misc. of my past posts mentioning John Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

Turbo C 1.5 (1987)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Turbo C 1.5 (1987)
Newsgroups: alt.folklore.computers
Date: Mon, 23 Oct 2006 18:51:16 -0600
Al Kossow <aek@spies.com> writes:
I would suggest using Imagedisk
http://www.classiccmp.org/dunfield/img/

If you need a drive, I should be able to loan one to you, or if they are of historical interest, I could read the floppies for you at the Museum.


any directions on how to help imagedisk find the floppy controller ... it just hangs on startup saying it can't find any floppy drives.

i put a 1.2m 5.25in teac drive in an old pentium pro machine whose scsi hard disk had long since died ... however, the floppy cable did have an edge connector for the teac drive ... along with a 3.5in floppy reader. i've got a number of dos4 and dos6 3.5in bootable diskettes and was able to read numerous 360kbyte diskettes on the B: drive.

however, i haven't been able to read/process any of the 80trk 10sector diskettes, nor a half dozen of the 360kbyte diskettes. i was hoping that imagedisk would be able to do the trick ... but imagedisk apparently can't find the floppy controller to directly access.

the dos systems haven't had a problem and i've also been able to boot and run knoppix (from the scsi cdrom drive).

Are there more stupid people in IT than there used to be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Mon, 23 Oct 2006 21:23:41 -0600
winston19842005 writes:
All the problems with data being lost, like your credit card numbers.

People taking laptops home with information that was supposed to be secure.


in the early 80s, there was a decision to allow off-site/remote access via portable terminals. one of the vulnerabilities identified as part of the effort was the extreme risk of most hotel PBXs. as part of that, it was decided to build a custom encrypting modem for internal corporate use ... and no offsite/remote access would be allowed w/o the appropriate encrypting capability.

i found it quite remarkable the number of operations allowing offsite/remote access in the 90s up thru the present day that don't use even the simplest security measures. if so many organizations weren't even encrypting offsite/remote communication ... why would you expect that they would provide protection for data actually on the laptops????

for other drift ... there is my old posting on security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

and post with quite a few references to data breaches and other data loss scenarios
https://www.garlic.com/~lynn/aadsm25.htm#24 DDA cards may address UK Chip&Pin woes

and collection of posts on various associated vulnerabilities and exploits
https://www.garlic.com/~lynn/subintegrity.html#yescard

note that the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all related payments in conjunction with the work on the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

part of the work eliminated risk associated with account numbers for x9.59 transactions ... i.e. x9.59 didn't do anything about preventing data breaches associated with account numbers
https://www.garlic.com/~lynn/subintegrity.html#harvest
https://www.garlic.com/~lynn/subintegrity.html#secrets

however, it eliminated any risks associated with attackers obtaining (x9.59 related) account numbers ... and since the risks were eliminated ... then based on security proportional to risks ... the security issues were significantly mitigated.

misc. past posts mentioning corporate encrypting modems for offsite/remote access
https://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm22.htm#16 serious threat models
https://www.garlic.com/~lynn/aadsm25.htm#34 Mozilla moves on security
https://www.garlic.com/~lynn/aepay11.htm#37 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2003i.html#62 Wireless security
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article

Ranking of non-IBM mainframe builders?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Tue, 24 Oct 2006 11:41:04 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
note that it was external visibility like this that got HSDT into political problems with the communication group ... in part, because there was such a huge gap between what they were doing and what we were doing.

re:
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?

some other background ... part of the history leading up to the letter from NSF being sent to corporate hdqtrs (mentioned in the 4/17/86 email referenced in the previous post); the events had significant overtones of corporate political infighting

Date: 05/06/85 16:41:57
From: wheeler

I noticed in this month's sigops a quicky synopsis of Project Admiral ... I've been making presentations on HSDT recently (it has been pitched to NSF as a backbone to tie together all the super computer sites). Last week in Boulder at the National Center for Atmospheric Research, one of the people mentioned that Project Admiral and HSDT will be managing the satellite channel similarly ... do you have more detailed info?

BTW ... I'll probably be over in Europe during July. People at NSF ask that HSDT also be pitched to the european network (EARN).


... snip ... top of post, old email index, NSFNET email

Date: 09/30/85 15:34:06
From: wheeler

I'm meeting with NSF on weds. to discuss the implementation for tying together the super-computer centers. I'll be back in Milford next week to present the NSF project status. I've been asked to get together with Univ. of Cal. by the end of the month to discuss their use for tying together the UofC campuses.


... snip ... top of post, old email index, NSFNET email

Erich Bloch was director of the National Science Foundation for much of the 80s.

Date: 04/07/86 10:06:34
From: wheeler

re: hsdt; I'm responsible for an advanced technology project called High Speed Data Transport. One of the things it supports is a 6mbit TDMA satellite channel (and can be configured with up to 4-6 such channels). Several satellite earth stations can share the same channel using a pre-slot allocated scheme (i.e. TDMA). The design is fully meshed ... somewhat like a LAN but with 3000 mile diameter (i.e. satellite foot-print).

We have a new interface to the earth station internal bus that allows us to emulate a LAN interface to all other earth stations on the same channel. Almost all other implementations support emulated point-to-point copper wires ... for 20 node network, each earth station requires 19 terrestrial interface ports ... i.e. (20*19)/2 links. We use a single interface that fits in an IBM/PC. A version is also available that supports standard terrestrial T1 copper wires.

It has been presented to a number of IBM customers, Berkeley, NCAR, Erich Bloch and Dick Jennings at NSF and a couple of others. NSF finds it interesting since 6-36 megabits is >100* faster than the "fast" 56kbit links that a number of other people are talking about. Some other government agencies find it interesting since the programmable (full bandwidth) encryption interface allows crypto-key to be changed on a packet basis. We have a design that supports data-stream (or virtual circuit) specific encryption keys and multi-cast protocols.

We also have normal HYPERChannel support ... and in fact have done our own RSCS & PVM line drivers to directly support NSC's A220. Ed Hendricks is under contract to the project doing a lot of the software enhancements. We've also uncovered and are enhancing several spool file performance bottlenecks. NSF is asking that we interface to TCP/IP networks.


... snip ... top of post, old email index, NSFNET email
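
the link-count arithmetic in the email above is just the fully-meshed formula ... n nodes need n*(n-1)/2 point-to-point links vs. one shared-channel interface per node; a quick sketch:

```python
def mesh_links(n):
    """point-to-point links needed to fully mesh n nodes"""
    return n * (n - 1) // 2

# 20-node network: 190 links (19 ports per station) vs. 20 shared-channel interfaces
print(mesh_links(20))
```

which is why a shared TDMA channel that emulates a LAN scales so much better than emulated point-to-point copper wires.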

lots of other posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

misc. posts mentioning bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

post referencing old email about setting up earn:
https://www.garlic.com/~lynn/2001h.html#65 UUCP email

32 or even 64 registers for x86-64?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 32 or even 64 registers for x86-64?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 25 Oct 2006 09:54:50 -0600
"ranjit_mathews@yahoo.com" <ranjit_mathews@yahoo.com> writes:
Elcaro Nosille wrote:

Today's RISC CPUs don't have a minimized set of instructions. The only difference to CPUs we called CISC is that they're pure load/store architectures, i.e. they don't have operations with memory operands except for load or store.

... but then, "reduced" doesn't mean minimized. RISC instructions are reduced to the extent of removing 1-3 address operations with operands in memory.

Be that as it may, RISC was as much a marketing term as an architectural description. Before the Spectrum architecture was renamed to PA-RISC, did HP call Spectrum a RISC architecture? Did Apollo use "RISC" to refer to their PRISM architecture (which was nipped in the bud by HP when they took over Apollo)?


mention of risc from long ago and far away ...

Date: 29 September 1982, 10:46:37 EST
To: wheeler

Hm, interesting article you sent....

I obtained advance information about the Intel iAPX286 .... it looks very interesting.... also have some data on the iAPX 86/30 and 88/30 Operating system processors.... this looks extremely interesting, and one can assume that the 286 cannot be far behind in getting an operating system chip to go with it....

iAPX86/30 is a 2 chip processor, with 35 operating system processor primitives as instructions... things like job and task management, interrupt management, free memory management, intertask communication, intertask synchronization, and environmental control... It also supports 5 operating system data types: jobs, tasks, segments, mailboxes, and regions. Someday we'll be able to look back at the big RISC vs. CISC and wonder what all the fuss was about....


... snip ... top of post, old email index

a couple previous posts with copy of the above email
https://www.garlic.com/~lynn/2006p.html#15 25th Anniversary of the Personal Computer
https://www.garlic.com/~lynn/2006p.html#19 Wat part of z/OS is the OS?

and nearly a year earlier

Date: 12/08/81 21:53:49
To: wheeler

Re: Complexity

Agreed, decomposition is ONE solution to solving complex problems. Another widely accepted solution is successive approximation. The principle there is that you attempt a cheap solution as an approximation, observe the results and iterate to produce a better solution. This method has been used in the creation of the S/370 architecture, among other things. It is already an established mathematical and engineering procedure for establishing optimality.

In the 801, quite a bit has been pushed onto the code/compiler/operating system. As a first approximation, that's not a bad idea, since it is relatively cheaper to change the operating system than it is to change the machine. I claim that after iterating on the operating system a while, the proper concepts can be reasonably put into silicon. It is just too damn expensive to go out and try to do it right the first time. I believe that the iapx432 will fail for that reason. I can't believe that they are going to get it right so that they can get into mass production until after the development budget has completely blown any profits they may make.

Whatever Yorktown's reasons, a simple computer isn't a bad idea. Seymour Cray has made one hell of a lot of hay on that idea. Guys at Berkeley have ripped off the basic 801 idea (simple computer) and built RISC (Reduced Instruction Set Computer) in FET which they claim will outrun most mini's in the world. Complexity in a machine architecture SLOWS IT DOWN. Let's find the complexity in the software, and then move it to the hardware as necessary on further iterations.


... snip ... top of post, old email index

wikipedia reference to berkeley risc
https://en.wikipedia.org/wiki/Berkeley_RISC

lots of past postings mentioning 801, romp, rios, fort knox, power, somerset, etc
https://www.garlic.com/~lynn/subtopic.html#801

as an aside, i've previously mentioned being at an (internal) advanced technology conference in early 77 doing a presentation on a 16-way smp design ... where the 801 group was presenting 801 (and getting into an argument about how to handle shared segments in a virtual memory architecture). the compiler (pl.8) and operating system (cp.r) were as much the design as the hardware ... since, in effect, a lot of hardware stuff was being moved into software.

my view was that some amount of the motivation for 801 was in reaction to the failed future system effort
https://www.garlic.com/~lynn/submain.html#futuresys

and to try and do as nearly as possible, the exact opposite, in terms of hardware complexity.

and as to the mentioned subthread on complexity (and moving complex software into hardware) ... my somewhat counter argument from the period ... was that a lot of software complexity (especially kernel) was a spaghetti-patched mess that had possibly evolved over a decade (or more). the original people that understood the purpose of the code are no longer around and what is left is a lot of incremental patching activity.

the benefits of redoing in hardware come as much from just the redoing (and figuring out what is really going on) as from moving anything into hardware.

what you did find was that some of the people that had worked on early 801 went to both amd (29k?) and hp (snake?).

there is some folklore that one of the people that gave notice that they were leaving for HP ... then spent their remaining two weeks in the los gatos lab doing some final work on blue iliad (early 32bit 801 chip).

the aix unix workstation, pc/rt ... with (an 801) ROMP chip ... originally started out as an OPD project for a follow-on to the displaywriter. when that project was killed, there was some looking around and an eventual decision to retarget the machine to the unix workstation market. the company that had done the at&t unix port to the ibm/pc for pc/ix was hired to do a similar port for what was to become "aix". cp.r had to be scrapped ... however the group writing the pl.8 displaywriter software was retargeted to implement a virtual machine abstraction (VRM) in pl.8. the company doing the unix port was directed to do it to the VRM interface (rather than the bare hardware).

recent post mentioning vrm:
https://www.garlic.com/~lynn/2006r.html#49 Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's

Root CA CRLs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Root CA CRLs
Newsgroups: microsoft.public.security
Date: Wed, 25 Oct 2006 11:56:30 -0600
Seeker <newsgroups@minusthespam.michaelstarks.com> writes:
I'm setting up a two-tier PKI hierarchy. The root will be offline and will sign the issuing CA certificate. What is the best-practice for the root Certificate Revocation List and revoking the root certificate?

Should I immediately revoke the root certificate after creating the issuing CA and store it in a secure location in case the passphrase is lost?

Should I create a certificate revocation list from the root or only on the issuing CA? I certainly don't want to have to retrieve the root authority to update the list, but will the clients handle this OK if the root public key is in the browser but the issuing CA revokes certificates and publishes the list? I would think so, since the chain should be intact.


scrap PKIs, certification authorities, and certificate revocation lists ... and migrate to a real-time, online infrastructure.

the original kerberos pk-init ... misc. past posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos

started out with a certificate-less public key infrastructure ... the registration authority (which is common to lots of business processes) just registered the public key in lieu of a password ... w/o having to issue a certificate ... misc. past posts
https://www.garlic.com/~lynn/subpubkey.html#certless

so entities can connect/login just thru kerberos digital signature authentication protocol ... where the digital signature is directly verified by doing real-time access to the registered public key.
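
the certificate-less flow described above (register a public key in lieu of a password, then authenticate by verifying a digital signature against a real-time lookup of the registered key) can be sketched with a toy registry. everything here is invented for illustration ... the names, the dict standing in for the registration authority's key-on-file, and the toy textbook-RSA parameters (the classic p=61, q=53 example) ... and is emphatically not real cryptography:

```python
import hashlib

# toy textbook-RSA parameters (p=61, q=53) ... illustration only, NOT real crypto
N, E, D = 3233, 17, 2753

registry = {}   # stands in for the registration authority's key-on-file

def register(principal, public_exponent):
    """registration authority records the public key in lieu of a password"""
    registry[principal] = public_exponent

def sign(message: bytes, private_exponent: int) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(h, private_exponent, N)

def verify(principal, message: bytes, signature: int) -> bool:
    """real-time lookup of the registered public key ... no certificate, no CRL"""
    e = registry[principal]
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, e, N) == h

register("lynn", E)
sig = sign(b"login challenge", D)
print(verify("lynn", b"login challenge", sig))
```

the point is only that verification needs nothing beyond real-time retrieval of the registered key ... in a real deployment the registry would be the kerberos principal database (or a domain name infrastructure record) and the signature scheme would be a proper one.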

note there was also a similar certificate-less implementation done for radius ... again, w/o having to resort to any of the PKI, certification authority, and/or certificate revocation list stuff.
https://www.garlic.com/~lynn/subpubkey.html#radius

for the SSL, server authentication/encryption scenario ... misc. past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the scenario involved registering public keys with the domain name infrastructure ... and doing real-time retrieval of public keys as part of the existing domain name infrastructure protocols (even piggyback in existing transmission).

the issue here was that the PKI/CA business operations were already somewhat advocating registration of public keys with the domain name infrastructure ... as a countermeasure to some integrity issues that SSL domain name certificate operations have ... aka as part of a certification authority certifying an SSL domain name certificate, they have to validate that they are dealing with the true domain name owner. they currently require a lot of "identification" information ... which they then cross-check with the registered identification for the domain name owner on file with the domain name infrastructure. having the domain name owner register a public key helps eliminate some vulnerabilities associated with who the domain name owner is.

this does represent something of a catch-22 for the SSL domain name certificate operation. With a public key on file for the domain name owner, they can now require that all SSL domain name certificate applications be digitally signed. They can then do a real-time (certificate-less) retrieval of the public key from the domain name infrastructure for validating the digital signature. This turns an expensive, error-prone, and time-consuming identification process into a much simpler, less-expensive, and more reliable authentication process (as part of due diligence related to the SSL domain name certificate application).

the catch-22 issues are:

1) SSL domain name certificates were originally justified (in part) based on integrity issues with the domain name infrastructure. improving the overall integrity of the domain name infrastructure reduces some of the original justification for having SSL domain name certificates

2) if the SSL domain name certificate operations can do real-time retrieval of public keys for their purposes (basically it becomes the trust root for the certification process) ... then it is possible that the rest of the world could also start doing real-time retrieval of public keys from the domain name infrastructure ... eliminating the need for SSL domain name certificates at all (aka directly using the same trust root that is the foundation for the SSL domain name certificates)

misc. past posts mentioning the catch-22 issue for the SSL domain name certificate business related to their desire to have public keys registered with the domain name infrastructure ... and being able to do real-time, certificate-less, public key retrievals.
https://www.garlic.com/~lynn/subpubkey.html#catch22

32 or even 64 registers for x86-64?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 32 or even 64 registers for x86-64?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 25 Oct 2006 16:10:17 -0600
re:
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?

in the following, "MLS" refers to the 16-way SMP mentioned in the previous post. VAMPS was a 5-way smp project that preceded MLS ... misc. collected posts mentioning VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

Date: 11/28/78 14:45:51
From: wheeler

I can wait. Remember the '801' cpu from Yorktown? There was a presentation on it at that ad-tech meeting in POK when we presented MLS. Well it is being fabricated in Los Gatos and they are looking for applications for it. They are interested in programming 370 instruction set and running VM on it. There is a GPD ad-tech group located here in bldg. 28 looking at system support. A lot of things in VAMPS and MLS look like possibilities.


... snip ... top of post, old email index

Date: 06/07/79 07:48:11
From: wheeler

The 801 seems to be the only game in town. The hardware architecture is much closer to vertical microcode than machine language (there has even been suggestion that 370 be programmed on it, which would make the 801 microcoded).


... snip ... top of post, old email index

one of the suggested purposes for 801s was to replace the myriad of internal microprocessors ... including some number of microprocessors used in 370 implementations. misc. posts mentioning microprocessor microprogramming
https://www.garlic.com/~lynn/submain.html#mcode

old email that i've posted before mentioning 801:

Date: 79/07/11 11:00:03
To: wheeler

i heard a funny story: seems the MIT LISP machine people proposed that IBM furnish them with an 801 to be the engine for their prototype. B.O. Evans considered their request, and turned them down.. offered them an 8100 instead! (I hope they told him properly what they thought of that)


... snip ... top of post, old email index

note: 8100 system used a totally different microprocessor (I believe uc.5) that was in no way related to 801.

and for something completely different, 801's pl.8 compiler (using pascal language front end and both 68k and 370 backend code generation)

Date: 8 August 1981, 16:47:28 EDT
To: wheeler

the 801 group here has run a program under several different PASCAL "systems". The program was about 350 statements and basically "solved" SOMA (block puzzle..). Although this is only one test, and all of the usual caveats apply, I thought the numbers were interesting... The numbers given in each case are EXECUTION TIME ONLY (Virtual on 3033).


6m 30 secs               PERQ (with PERQ's Pascal compiler, of course)
4m 55 secs               68000 with PASCAL/PL.8 compiler at OPT 2
0m 21.5 secs             3033 PASCAL/VS with Optimization
0m 10.5 secs             3033 with PASCAL/PL.8 at OPT 0
0m 5.9 secs              3033 with PASCAL/PL.8 at OPT 3

... snip ... top of post, old email index

Date: 11/04/81 08:48:39
To: wheeler
From: somebody in Endicott

Lynn,
Somebody suggested that you may know of someone that has done a PL1 version of CP. We are interested in any information you may have on the subject.
We are currently looking for a control program that could be run natively on an Atlantic machine (801 processor from Yorktown). We have access to a PL1-like compiler which we can generate 801 code with. If such a PL1 version of CP existed it may save us some recoding of existing function. We are still in a hardware position where trade-offs can be made to support the control program. It is sufficient for the near term that the virtual machine support be S/370 only. This would probably change later.


... snip ... top of post, old email index

aka Endicott was looking at using an "801" as the microprocessor in the follow-on to the 4341. I had done a program that analyzed 370 assembler listings, register usage, code flow, and instructions ... and then generated a program representation in pli-like pseudo code. the reference in the above to "PL1-like" is, of course, pl.8. the idea behind this email was not only to use the 801 as the microprocessor for emulating 370s ... but also to take the whole vm370 virtual machine kernel and drop it into native 801 code.

this would have been somewhat analogous to the current generation of 370 virtual machine supervisors implemented on i86 platforms.

Why these original FORTRAN quirks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers
Date: Wed, 25 Oct 2006 16:15:28 -0600
Joe Morris <jcmorris@mitre.org> writes:
A similar idea reportedly earned an IBM FE a big bonus back in the days of OS/360: he designed a template to be printed on transparent acetate that could be laid down on a hex dump of a VTOC. For each of the various flavors of DSCB it marked the location of each field and its name, making it a trivial task to chase down the bytes that you didn't need often enough to memorize their location.

Given the number of records in a VTOC and the amount of information they contained, a formatted listing would have been quite long; the VTOC overlay gave you most of the info you needed but required much less paper.


much later with dumprx (nearly all written in rex)
https://www.garlic.com/~lynn/submain.html#dumprx

i had a format command where you could point at a starting location in a dump image and a "DSECT" library member ... and it would interpret the DSECT source on the fly and format the storage according to the specified DSECT.
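
the on-the-fly DSECT interpretation can be sketched roughly as follows. this is a minimal illustration in python, not the actual rexx/dumprx code; the three-column DSECT grammar (name, hex offset, hex length) and the field names here are invented for the example:

```python
# Minimal sketch of DSECT-driven dump formatting (hypothetical DSECT
# format, not the actual dumprx implementation): each DSECT entry gives
# a field name, offset, and length; the formatter walks the entries and
# labels the corresponding bytes in the dump image.

def parse_dsect(source):
    """Parse lines of the form 'NAME OFFSET LENGTH' (hex) into tuples."""
    fields = []
    for line in source.strip().splitlines():
        name, off, length = line.split()
        fields.append((name, int(off, 16), int(length, 16)))
    return fields

def format_storage(dump, start, fields):
    """Label dump bytes at 'start' according to the DSECT fields."""
    out = []
    for name, off, length in fields:
        chunk = dump[start + off : start + off + length]
        out.append(f"{name:<8} +{off:02X} {chunk.hex().upper()}")
    return out

dsect = """\
VMPSW 00 8
VMGPRS 08 4
VMFLAG 0C 1
"""
dump = bytes(range(0x40))           # stand-in for a dump image
for line in format_storage(dump, 0x10, parse_dsect(dsect)):
    print(line)
```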

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Wed, 25 Oct 2006 16:51:22 -0600
re:
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?

other HSDT email from long ago and far away

Date: 04/18/84 11:27:41
To: wheeler

There is a T1 circuit currently scheduled by NY Tel for installation 12/84 from Yorktown to West 1. This was the missing link in the t1 circuit from IBM Kingston to Yorktown. NY Tel now indicates that they believe they can improve the installation date to 9/1/84. INET's interest in doing that, and in fact, installing the circuit at all at this time may depend on HSDT requirements.


... snip ... top of post, old email index, HSDT email

the April email concerns an on-order T1 link between Yorktown (NY) and IBM Kingston (NY), noting that the installation date might possibly be improved from December to September.

misc. past posts mentioning HSDT project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Ranking of non-IBM mainframe builders?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ranking of non-IBM mainframe builders?
Newsgroups: alt.folklore.computers
Date: Wed, 25 Oct 2006 21:00:22 -0600
re:
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#11 Ranking of non-IBM mainframe builders?

and another old email related to hsdt

Date: 11/21/84 08:55:26
From: wheeler

re: hsdt; oh, almost forgot. I'm having another meeting with the Berkeley ten meter telescope people (this time just the IBMers working with them). They want to set-up for remote observing (observatory will be about 14,000 foot level in Hawaii) from both "local" sea level and eventually the mainland. Current estimates are that the digitized image represents about 800kbits of data during the evening hours (data flow is asymmetrical with telescope control commands going in the opposite direction only about 1200 baud).

Because of numerous reasons (staffing, support, etc.), they would like to limit the hardware at the site to PC/AT scale machine (customer field replaceable units ... service call would be extremely time consuming and lose valuable observing time).


... snip ... top of post, old email index, NSFNET email

a couple other posts on the subject (including copies of old email)
https://www.garlic.com/~lynn/2004h.html#7 CCD technology
https://www.garlic.com/~lynn/2004h.html#8 CCD technology

and recent news article about there being some damage in the recent earthquake:
http://www.keckobservatory.org W. M. Keck Observatory
http://www.keckobservatory.org/news.php W. M. Keck Observatory News
http://www.keckobservatory.org/article.php?id=95

VM SPOOL question

Refed: **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VM SPOOL question
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 25 Oct 2006 22:29:42 -0600
lynn writes:
for a little drift ... also as an undergraduate i completely redid the page replacement algorithm ... implementing global LRU based on page reference bits ... lots of past postings discussing page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock


re:
https://www.garlic.com/~lynn/2006s.html#25 VM SPOOL question

for additional drift ... another old email mentioning page replacement work:

Date: 6/22/92 19:57:09
From: wheeler

Last year there was some sort of announcement from Amdahl regarding Huron ... describing it as a hierarchical database system ... part of something that looked like it was targeted along the same lines as ad/cycle.

at a meeting in boston on friday, i ran into consultant/professor (who i had known from the early '70s) that mentioned he had designed the Huron database system and was working with Amdahl on the implementation.

the actual context of the discussion started out on LRU replacement algorithms and a paper he and two other professors were writing on the subject. I had been responsible for possibly the original implementation of the "clock" algorithm in the late '60s (which he had known). the point of the discussion was that i had also developed a different class of LRU replacement algorithms during the early 70s that had some interesting advantages over clock. specifically both "clock" and this hybrid are approximations to a "true" LRU replacement (i.e. the time of the most recent reference to every object in memory could be exactly determined ... and all the objects could be exactly ordered by the exact/true reference information).

Doing detailed trace-driven simulation studies, "clock" would be measured as coming within X% (where X is typically in the 2-10 range) of the performance of true LRU (i.e. almost as good as). The interesting thing about the hybrid was that it was possible to find a version of the hybrid that was 5-10% better than true LRU.

In any case, he had thought that his work on LRU "clock" like replacement algorithms could significantly improve the performance of the Huron buffer cache manager. He had forgotten about this other work I had done on hybrid replacement algorithm. In any case, he became interested in whether I would review and/or even possibly contribute to the paper.

As to Huron database, he said that it was NOT a hierarchical implementation and had asked Amdahl to not describe it as such. Some of the details sound something like what Atherton went thru starting with a RDBMS and then evolving to a RYO for unstructured real-world data. Claim is that Huron can handle relational queries but also maintains order (w/o having to sort on sequence field ... especially evolving mega-entries) and can do direct links (w/o join overhead), as well as not having to recompile apps when schemas change.

In any case, is there anybody out there that already has detailed description of Huron database implementation?


... snip ... top of post, old email index
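
for readers unfamiliar with "clock": the algorithm gives every recently referenced page a second chance by clearing its reference bit as the hand sweeps past, evicting the first page found unreferenced. a minimal sketch (a hand-rolled illustration, not the cp67/vm370 implementation):

```python
# Minimal sketch of the "clock" page-replacement algorithm the email
# discusses: a hand rotates over the frames; a referenced page gets a
# second chance (its reference bit is cleared), and the first
# unreferenced page found becomes the victim.  Illustration only,
# not the actual cp67/vm370 code.

class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page occupying each frame
        self.ref = [False] * nframes     # hardware reference bits
        self.hand = 0

    def touch(self, page):
        """Reference a page; fault it in if absent, returning any victim."""
        if page in self.frames:
            self.ref[self.frames.index(page)] = True
            return None
        while self.frames[self.hand] is not None and self.ref[self.hand]:
            self.ref[self.hand] = False          # second chance
            self.hand = (self.hand + 1) % len(self.frames)
        victim, self.frames[self.hand] = self.frames[self.hand], page
        self.ref[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
        return victim

c = Clock(3)
for p in "abcab":
    c.touch(p)
# every frame was recently referenced, so the hand sweeps a full
# circle clearing bits and evicts 'a'
print(c.touch("d"))
```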

32 or even 64 registers for x86-64?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 32 or even 64 registers for x86-64?
Newsgroups: alt.folklore.computers
Date: Thu, 26 Oct 2006 17:05:45 -0600
Peter Flass <Peter_Flass@Yahoo.com> writes:
Lynne, your posts are always interesting, but a little more context would be nice here, I don't quite get what you are saying:-(

the thread started in comp.arch ... issue somewhat about risc vis-a-vis cisc ... as well as early use of the term risc (somewhat with regard to HP circa 1989).

in my previous post
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?

i had referenced my original post in the thread:
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?

which included email from 1981
https://www.garlic.com/~lynn/2006t.html#email810812
and 1982
https://www.garlic.com/~lynn/2006t.html#email820929
where the term "risc" was used ... somewhat in conjunction with 801. I had also added alt.folklore.computers ... in part because of the included references from the early 80s.

the referenced 1981 email included somebody's comment that possibly Berkeley had "ripped" off the basic idea for risc from 801.

in the previous post that you referenced
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?

I followed up with additional email from 1978,
https://www.garlic.com/~lynn/2006t.html#email781128

1979,
https://www.garlic.com/~lynn/2006t.html#email790607
https://www.garlic.com/~lynn/2006t.html#email790711

and 1981
https://www.garlic.com/~lynn/2006t.html#email811104

that also included 801 references.

... aka RISC dates back to earlier than HP circa 1989 ... or Berkeley in the early 80s ... but to original 801 from the mid-70s.
https://www.garlic.com/~lynn/subtopic.html#801

which I've frequently commented was possibly a reaction to the failed future system project
https://www.garlic.com/~lynn/submain.html#futuresys

where 801 appeared to take nearly the opposite tack from that of the future system project with regard to hardware complexity.

and just for the fun of it, other posts that include early emails that mention 801
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
https://www.garlic.com/~lynn/2006c.html#3 Architectural support for programming languages
https://www.garlic.com/~lynn/2006o.html#45 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email

more than 16mbyte support for 370

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: more than 16mbyte support for 370
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 09:06:29 -0600
for another old email ... here is a comment on a proposal for adding >16mbyte real storage to 370 (originally 3033, but also used with 3081s and 3090s in 370 mode)

Date: 01/21/80 11:39:17
From: wheeler
To: somebody in Endicott

I got a call at home from YKT telling me that POK processor plans are slipping but they want to maintain revenue flow. Proposal is to offer >16meg real storage /processor. Use of additional storage would only be via use of current "Must Be Zero" bits in existing page table entries (i.e. two additional address lines from the PTE would allow addressability thru 64meg., instruction decode/addressability, etc. remain unaltered and limited to 24bit addresses). POK/VM proposal was to limit all of CP and control blocks to <16meg. Virtual pages would only be allowed >16meg. Any CP instruction simulation encountering page >16meg. would result in page being written out to DASD and read back into storage <16meg. Flag would also be set in SWPFLAG for that page indicating that in the future that page could only be paged into addresses <16meg (major problem they overlooked, other than the obvious overhead to do the page out / page in, is that after some period of time, most pages would get the <16meg flag turned on).

I countered with a subroutine in DMKPSA of about 25-50 instructions which is supplied a real address in a CP control block (<16 meg), a real address in a virtual page (possibly >16meg), and a length. Subroutine would 'insert' real addresses in two available PTEs in CP's virtual address tables. It would then enter translate mode, supervisor state, perform an MVCL and then revert to non-translate mode (I had originally created CP virtual address space control blocks in cp67 for paging portions of the cp67 kernel, an implementation that later shipped in vm370).

No page out/page in, and no creeping overhead problem where most pages eventually get the <16meg flag turned on. Also if special case MVCL was ever created to handle >16meg. addresses it would be a very small hit to the subroutine only.

It also has the attraction that access to virtual machine storage is concentrated in one place. It makes it much simpler to modify large sections of CP to run in relocate mode all of the time. Movement of most CP code to 'pseudo' virtual machine/ virtual address space leaves something behind which is much more nearly containable entirely in microcode.


... snip ... top of post, old email index
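
the arithmetic behind the proposal: with 4k pages, a 24-bit real address leaves a 12-bit page frame number in the PTE; treating the two "must be zero" bits as additional high-order address lines gives 14 bits of frame number, i.e. 64mbytes. a quick sketch (the PTE field arrangement here is invented for illustration, not the real 370 format):

```python
# Back-of-envelope sketch of the >16mbyte proposal: a page frame
# number for 4k pages in a 24-bit (16mbyte) real address space needs
# 12 bits; pressing the two reclaimed "must be zero" PTE bits into
# service as address lines extends reachable real storage to 64mbyte.
# Field layout invented for illustration, not the real 370 PTE.

PAGE = 4096
PFN_BITS = 12                       # 24-bit address / 4k page
EXTRA_BITS = 2                      # the reclaimed "must be zero" bits

def pte_to_real(pfn_low, extra):
    """Rebuild a real address from the normal PFN field plus the
    two reclaimed bits treated as extra high-order address lines."""
    return ((extra << PFN_BITS) | pfn_low) * PAGE

# without the extra bits: addressing tops out at 16mbyte
assert pte_to_real((1 << PFN_BITS) - 1, 0) + PAGE == 16 * 1024 * 1024
# with both extra bits set: addressing tops out at 64mbyte
assert pte_to_real((1 << PFN_BITS) - 1, 3) + PAGE == 64 * 1024 * 1024
# 14 bits of frame number x 4k pages = 64mbyte
assert (1 << (PFN_BITS + EXTRA_BITS)) * PAGE == 64 * 1024 * 1024
```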

this also was somewhat intertwined with comparisons of clusters of 4341s that had greater aggregate thruput and less cost than a 3033. part of this was that a 4.5mips 3033 could only be configured with 16mbytes real storage ... while six 4341s (easily six mips aggregate) could have an aggregate of 96mbytes real storage (@16mbytes each) ... for about the same cost. this was in the era when real storage was increasingly being used to compensate for the declining relative system thruput of disks (i.e. cpus, memories, and disks were all getting faster, but disks at a much lower rate than cpus and memories).

misc. posts from this year mentioning 370 >16mbyte real storage:
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006l.html#2 virtual memory
https://www.garlic.com/~lynn/2006m.html#27 Old Hashing Routine
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

misc posts from this year mentioning 4341 clusters:
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006l.html#2 virtual memory
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006s.html#41 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

misc. posts from this year mentioning the declining relative system thruput of disks:
https://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#1 using 3390 mod-9s
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#65 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

misc. posts from this year mentioning moving nearly all of vm370 spool file support into virtual address space:
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
https://www.garlic.com/~lynn/2006o.html#64 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006p.html#28 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006q.html#27 dcss and page mapped filesystem
https://www.garlic.com/~lynn/2006s.html#7 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#25 VM SPOOL question

Is the teaching of non-reentrant HLASM coding practices ever defensible?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is the teaching of non-reentrant HLASM coding practices ever defensible?
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 10:00:22 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
this also caused some perturbation in the original relational/sql implementation (all done as a vm370-based implementation).
https://www.garlic.com/~lynn/submain.html#systemr

where there were going to be some (virtual address space/)processes that had r/w access to the data ... but there was a design that had applications with access to some of the same data ... only unable to change that data. it was ideally designed to take advantage of the original 370 virtual memory architecture segment protection. however, the implementation then required some amount of fiddling for release as sql/ds.

for some trivia ... one of the people in the following meeting claimed to have been primary person handling sql/ds technology transfer from endicott back to stl for db2
https://www.garlic.com/~lynn/95.html#13


re:
https://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?

... in the following old email, DWSS was part of the original technology transfer of system/r to endicott for sql/ds (vm370 support for r/w, "unprotected" shared segments)

Date: 02/27/80 08:37:42
From: wheeler
To: numerous people in Endicott

re: yesterday's protect bit discussion.

It does make a difference whether the protect bit is in the page table entry or segment table entry (the page table entry bit, I think, may have originated with MVS 811 hardware to provide them selective storage protection in addition to don't trash/stomp on page zero for system protection).

Segment table entry protect bit provides selective protection for some users and the possibility of no protection for others, i.e. some virtual address space have R/W shared segments (DWSS) and others only have R/O access. Segment table protect bit was also defined in the original 370 architecture.


... snip ... top of post, old email index
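
a toy model of the point being made: because the protect bit lives in each address space's own segment table entry, the very same shared segment can be mapped r/w in one virtual address space (the DWSS case) and r/o in another. purely illustrative, not the 370 architecture definition:

```python
# Toy model of segment-table-entry protection: the protect bit is
# per-address-space, so one virtual machine can map a shared segment
# read/write (DWSS) while another maps the same segment read-only.
# Illustration only, not the 370 architecture definition.

class SegmentEntry:
    def __init__(self, segment, protect):
        self.segment = segment       # shared storage (a bytearray)
        self.protect = protect       # segment-table protect bit

def store(entry, offset, value):
    """Simulate a store through a segment table entry."""
    if entry.protect:
        raise PermissionError("protection exception")
    entry.segment[offset] = value

shared = bytearray(16)
writer = SegmentEntry(shared, protect=False)   # r/w shared segment
reader = SegmentEntry(shared, protect=True)    # r/o view of same segment

store(writer, 0, 0x42)
assert reader.segment[0] == 0x42               # both see the store
try:
    store(reader, 1, 0x99)                     # r/o view can't modify
except PermissionError:
    pass
```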

other refs in this thread
https://www.garlic.com/~lynn/2006s.html#64 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006t.html#1 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006t.html#2 Is the teaching of non-reentrant HLASM coding practices ever defensible?

collected posts mentioning original relational/sql implementation,
https://www.garlic.com/~lynn/submain.html#systemr

old Gold/UTS reference

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: old Gold/UTS reference
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 10:38:32 -0600
so the old trivia about why Amdahl Unix was referred to as "gold" before it was announced as UTS?

note that ibm was doing a tss/370 rpq for bell ... ssup. this was tss/370 with all of the ui/api stripped away and unix interfaced to the low-level tss/370 kernel.

Date: 03/17/80 08:42:26
From: wheeler

Talked to people from Amdahl for two hours about UNIX. They now have a PHD who started working on putting up UNIX under VM while at Princeton. It is now up and running in production at Amdahl. Claim is that it represents more of a load on the VM system than either SCPs or CMS.

Somewhat simpler than CMS. By convention and/or design system tends to two and three letter commands w/o arguments. There might be 20 or 30 versions of a program like copyfile and since the possible two letter commands starting with 'C' are limited there would be a tendency to use any available two letter combination rather than go to 3 or 4 letters.

Comment is that it has become something of a 'cult' among computer science graduates (at least one of the people had used UNIX for 4-5 years in college) and feeling was that the high use of UNIX at Amdahl (over CMS) wasn't really justified. However from Bell's standpoint, its availability across the line of computers is very attractive.
... snip ... top of post, old email index

previous post mentioning ssup/unix and uts ... including email from '84 talking about benchmark comparing ssup/unix and uts
https://www.garlic.com/~lynn/2006e.html#31 MCTS

other posts mentioning ssup/unix and/or uts:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001e.html#19 SIMTICS
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002m.html#24 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003g.html#24 UltraSPARC-IIIi
https://www.garlic.com/~lynn/2003g.html#31 Lisp Machines
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#16 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005b.html#13 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#61 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005m.html#4 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005m.html#7 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005q.html#26 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005s.html#34 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2006b.html#24 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2006e.html#33 MCTS
https://www.garlic.com/~lynn/2006f.html#26 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2006f.html#27 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2006f.html#28 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2006m.html#30 Old Hashing Routine
https://www.garlic.com/~lynn/2006p.html#22 Admired designs / designs to study
https://www.garlic.com/~lynn/2006p.html#26 Admired designs / designs to study

Why magnetic drums was/are worse than disks ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why magnetic drums was/are worse than disks ?
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 12:41:17 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
when i was trying to get 3350 hardware change to support multiple exposures, i also redid the internal control block structure for page allocation on disk ... from being purely device oriented to being arbitrary device sub-areas ... having any organization. the default structure was all paging areas on a device ... following the previous purely device "speed" organization. However, it also allowed organizing the 3350 fixed head area on equivalent round-robin level with other fixed-head areas (like 2305 fixed-head disks). the combination of the 3350 hardware change for multiple exposures, the various page migration pieces, and the redo of the allocation control block structure (allowing arbitrary storage allocation policies) ... made the 3350 fixed head area significantly more useful.

re:
https://www.garlic.com/~lynn/2006s.html#59 Why magnetic drums was/are worse than disks ?

some old email discussing page allocation rework ... somewhat motivated by the 3350 fixed-head feature

Date: 01/19/80 10:45:54
From: wheeler
To: several people in POK

re: design for contiguous allocation of paging backing store. ref: SYSPAG script file.

New entry point can be defined in DMKPGT to allocate contiguous blocks (similar to the current 2305 code, possible to enhance that code so that it would serve both purposes). DMKPGM will be responsible for determining if contiguous allocation areas are required. It will call PGT to allocate the contiguous space. A SYSPAG block will be built which is chained off of the VMBLOK & points to an ALOCBLOK for the contiguous storage. The SYSPAG block will also point back into the normal SYSPAG chain allocation. On normal PGT allocation reaching the end of the preferred, non-drum SYSPAG (at the same place a test is made to determine if the preferred paging area 'full' flag is to be set), a check will be made for the existence of a VMBLOK's SYSPAG block. If one is found, the SYSPAG register pointer will be switched to the VMBLOK's SYSPAG block (the VMBLOK SYSPAG block will point back into the SYSPAG chain if no room is found in the VMBLOK area). DMKUSO will be responsible for releasing all the control blocks and allocate areas.


... snip ... top of post, old email index

now in the following, the reference to DMKPGM was to the page migration function that I had introduced as part of my resource manager
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

which included a lot of stuff i had done as an undergraduate for cp67 and then was subsequently dropped in the morph to vm370 ...

.... in the new syspag, since there were separate chain structures for allocation and deallocation ... just by removing all the relevant blocks from the allocation structure ... and then invoking page migration to move everything off a specific device ... there was no problem interlocking new pages getting allocated on the device (while page migration was running). the separate allocation and deallocation structures also made it possible to allow for user-specific contiguous page allocation ... as mentioned in the above email.

Date: 02/27/80 18:58:19
From: wheeler
To: several people in POK

A couple of additional points showed up today. SYSPAG allocation blocks are always constructed whether anything was specified at all or not. Attach/CPI subroutine will build default structure/ordering if no SYSPAG specifications have been made. Benefit is that PGT code is small, simple, short, and fast, and always the same. Complex decisions are made when blocks are built (and/or merged). This is of a lot of benefit because of frequent use of PGT compared to DMKCPI.

Another point came up. Requirement was to take volumes off line. Turns out SYSPAG structure easily supports implementation. All allocation blocks are double threaded. One way for allocation search and a different chain for de-allocation. Take about 75% of existing code in DMKPGM, put in another 300 or so lines of code and you have vary-off support of paging packs. New module would unchain all blocks for specific device from allocation chain, but leave on de-allocation chain. Then PGM code would be turned loose for pages on the specific device. When finished allocation blocks could be unchained from de-allocation chain and FRET'ed. I said in the document that design was open ended to allow dynamic re-organization of chains.


... snip ... top of post, old email index
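
the double-threading described above can be sketched as follows: every allocation block sits on both an allocation-search chain and a deallocation chain, so varying a paging volume offline is just unchaining its blocks from the allocation chain, letting page migration drain the pages, and then freeing the blocks. python lists stand in for the threaded control blocks; this is an illustration, not the DMKPGT/DMKPGM code:

```python
# Sketch of the double-threaded SYSPAG design from the email: blocks
# live on both an allocation-search chain and a deallocation chain.
# Vary-off unchains a device's blocks from the allocation chain only
# (so no new pages land there), migrates the remaining pages off, and
# then frees the blocks.  Python lists stand in for the threaded
# control blocks; illustration only, not the actual vm370 code.

class AllocBlock:
    def __init__(self, device):
        self.device = device
        self.pages = set()           # page slots currently in use

alloc_chain = []                     # searched when allocating a slot
dealloc_chain = []                   # searched when releasing a slot

def add_volume(device):
    blk = AllocBlock(device)
    alloc_chain.append(blk)          # double-threaded: on both chains
    dealloc_chain.append(blk)
    return blk

def vary_offline(device, migrate):
    """Unchain from allocation only, migrate pages, then free."""
    blk = next(b for b in alloc_chain if b.device == device)
    alloc_chain.remove(blk)          # no new allocations on the device
    for page in list(blk.pages):
        migrate(page)                # page migration drains the block
        blk.pages.discard(page)
    dealloc_chain.remove(blk)        # now safe to free (FRET) the block

a = add_volume("PAGE01")
b = add_volume("PAGE02")
a.pages.update({1, 2, 3})
moved = []
vary_offline("PAGE01", moved.append)
print(sorted(moved))                 # all three pages were migrated
```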

a past post also mentioning syspag work:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

another past post mentioning syspag work ... as well as subsequent work on "big pages" (originally for 3380s) which somewhat subsumed much of the earlier demand page-at-a-time optimization:
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?

old vm370 mitre benchmark

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: old vm370 mitre benchmark
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 13:21:18 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
i tested the program with 3830 controllers on 4341, 158, 168, 3031, 3033, and 3081. it turns out that a 3830 in combination with 4341 and 370/168, the head switch command processed within the 101 byte rotation latency.

the combination of 3830 and 158 didn't process the head switch command within the 101 byte rotation (resulting in a missed revolution). the 158 had integrated channel microcode sharing the 158 processor engine with the 370 microcode. all the 303x processors had an external "channel director" box. the 303x channel director boxes were a dedicated 158 processing engine with only the integrated channel microcode (w/o the 370 microcode) ... and none of the 303x processors could handle the head switch processing within the 101 byte dummy block rotation latency. the 3081 channels appeared to have similar processing latency to the 158 and 303x channel director (not able to perform the head switch operation within the 101 byte dummy block rotation).

i also got a number of customer installations to run the test with a wide variety of processors and both 3830 controllers and oem clone disk controllers.


re:
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF

The problem with benchmarking the Mitre paging changes is that they included the "BU" (boston univ) 3330 paging performance enhancements. The problem was that it was only possible to do 3330 multi-page transfers in a single revolution on 145/148/4341 and 168 ... attempting it on a 158 or any 303x could result in severe performance degradation ... because there would be an additional full revolution for each page transfer ... which not only tied up the device ... but also the disk controller (locking out all other disks on the same controller) and the channel.

the "2880" is the external channel box used by the 370/168. the 145/148/4341 had integrated channels ... as did the 370/158. as mentioned before, the 370/158 integrated channels were the basis for the external 303x channel director box used by all 303x models.
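As a back-of-the-envelope check (using commonly cited 3330 figures that are assumptions here, not from the post: 3600 rpm and roughly 13,030 bytes per track), the 101-byte dummy block gives the controller and channel a window on the order of 130 microseconds to complete the head switch:

```python
# Back-of-the-envelope 3330 head-switch window, using commonly cited
# 3330 specs (assumptions for illustration, not from the original post):
RPM = 3600                  # 3330 rotation speed
TRACK_BYTES = 13030         # approximate formatted 3330 track capacity
GAP_BYTES = 101             # dummy-block rotation budget from the post

rev_time_us = 60.0 / RPM * 1e6          # one revolution in microseconds
us_per_byte = rev_time_us / TRACK_BYTES
window_us = GAP_BYTES * us_per_byte

print(round(rev_time_us))   # ~16667 us per revolution
print(round(window_us))     # ~129 us to complete the head switch
```

A controller/channel combination that couldn't finish the head switch within that window paid for a full extra revolution (roughly 16.7 ms) per page, which is the degradation described above.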

Date: 02/28/80 12:12:17
From: wheeler
To: numerous vm370 product support people
re: benchmark Mitre paging changes;

somebody got a little 'took' on those changes. They include head switch 3330 changes from BU off the Waterloo tape. I sent out detailed info to at least some of you over a year ago.

Specs for 3330 don't allow enuf time to head switch between records with less than a 110 dummy record gap between pages. 3330 track is only spec'ed at about 100-101 bytes available for dummy record. BU went ahead and put in changes for head switching on their 145 system using 101 byte gaps (and optional code to count the number of successful head switches). I got same code benchmarked on 145, 158, 168, 3031, 3033; with ISCs, with 3830s, with&w/o string switching. Also got 168 benchmark with IBM drives, CDC drives, and Memorex drives.

145&168 on all combinations of IBM drives and CDC drives were able to successfully switch almost 100% of the time (Memorex drives could do head switch with 50 byte gaps 100% of the time).

158 at its best would only be successful about 20% of the time, the rest of the time requiring additional revolutions.

Turns out 145 channels are the best of the 370 line (rated around 2.7 megabyte capacity according to the engineers), then the 2880 and then the 158 channels. The 158 channels are by far the slowest of the three. It also turns out the 303x share the same hardware channel directors and all have performance numbers in this area comparable to the 158. It turns out that the channel directors are substantially 158 boxes. 148 are substantially 145 channels and I believe the 4341 channels are somewhat better.

I also 'warned' several IBM'ers to be wary of upgrading 168s with BU mods. and heavy 3330 paging to same configuration on any 303x because they may experience a degradation of thru-put.

I can somewhat understand the questionable benchmark results that were obtained by IBM running the Mitre mods. on 158 with 3330 paging.


... snip ... top of post, old email index

other posts in the referenced thread:
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#42 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#43 REAL memory column in SDSF

Why these original FORTRAN quirks?; Now : Programming practices

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?; Now : Programming practices
Newsgroups: alt.folklore.computers,comp.lang.fortran
Date: Fri, 27 Oct 2006 14:14:19 -0600
glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu> writes:
The mmm is incremented. You need to supply one DD card for each file you expect to read. There is no requirement that they be on the same tape, or even on tape at all. One reason is the extra work to process standard labelled tapes. There are label records between files, in addition to the tape mark.

VMS processes ANSI labelled tapes, and so may also do some of this processing. I did once convince OS/VS2 to write an ANSI labelled tape to read on VAX/VMS.


a standard labelled tape had an 80-byte "VOL1" record (similar to what is used on disk). files had an 80-byte "HDR1" record (and sometimes "HDR2" records) ... dating back to at least 360 days.

here is more discussion (from quicky use of search engine) for "VOL1" and "HDR1"
http://it-dep-fio-ds.web.cern.ch/it-dep-fio-ds/Documentation/tapedrive/labels.html

above also discusses ansi x3.27 standard and dec systems use.

some of the current tape library software managing later generations of large capacity tapes does volume stacking ... where a single physical tape will contain images of multiple logical tape volumes.

search engine reference included volume stacking
http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/topic/com.ibm.filemanager5.doc/base/fmnu1e02233.htm

misc. other search engine URLs mentioning VOL1 and HDR1
http://www.vsoftsys.com/doc/doc42/vtape/vtap4006.htm
http://operations.rutgers.edu/tapes/tape_formats.html
http://www-03.ibm.com/servers/eserver/zseries/zvse/downloads/tools.html
http://groups.google.com/group/alt.sys.pdp11/browse_thread/thread/d043a37616ff4e19/ac76930d628ef0fa
http://www.decus.org/libcatalog/document_html/vs0097_8.html
http://cnlart.web.cern.ch/cnlart/234/art_tlab.html
http://www.cbttape.org/awstape.htm
http://mitvma.mit.edu/cmshelp.cgi?CMS%20TAPEMAC%20
http://mitvma.mit.edu/cmshelp.cgi?CMS%20TAPPDS%20
http://pcmap.unizar.es/softlinux/VMS-to-Linux-HOWTO.txt

search engine even turns up one of my old posts mentioning VOL1 and HDR1:
https://www.garlic.com/~lynn/2004q.html#20 Systems software versus applications software definitions

which discusses an old backup/archive system that i had written for internal use ... which then went thru several iterations, was eventually released as workstation datasave facility, morphed into ADSM, and is now known as TSM
https://www.garlic.com/~lynn/submain.html#backup

in Melinda's section on CMSBACK in her history
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

her history starts with what would be considered "version 3" (or maybe version 4) of CMSBACK. I had done the implementation and deployment for the initial version; then another person and I did quite a bit of enhancements. The other person that I worked with left IBM and went on to do development for some number of other (non-ibm) vm archive/backup systems (and I started work on other projects). It was in this period that it was turned over to the two people mentioned in Melinda's history related to CMSBACK.

Between the start of that phase in CMSBACK development and the release of workstation datasave facility (precursor to ADSM), there was an effort (not mentioned in Melinda's history) to get Endicott to make it available as a product. Endicott evaluated marketing one of the non-IBM products or releasing the internal CMSBACK. At that time, Endicott chose to market one of the non-IBM products (much of that development had been done by the same person that I had worked with on the earlier CMSBACK version).

from Melinda's paper
ADSM had its origins in about 1983 as a program called CMSBACK, written by two VM support people at IBM's Almaden Research, Rob Rees and Michael Penner. CMSBACK allowed CMS users to back up and restore their own files. It quite naturally became the basis a few years later for a program to allow workstation users to back up and restore their files, which was announced as the WDSF/VM product in 1990. By that time, VM Development in Endicott had gotten involved, doing much of the work required to bring the new product to market.

... snip ...

CMSBACK actually dates back a number of years earlier, before Almaden was built and/or the two mentioned individuals had even been hired.

Are there more stupid people in IT than there used to be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Fri, 27 Oct 2006 14:58:22 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
The larger the group, the lower the effective intelligence of that group, regardless of the intelligence of its individual members. I suspect the formula is something like the one for calculating resistances in parallel.

some possible quantifications ... group intelligence as the lowest member iq

min(iq1, iq2, ... iqN)

but then instead of the avg

sum(iq1, iq2, ..., iqN)/N

it may be more like

sum(iq1, iq2, ..., iqN)/(N**2)

the sum of the individual IQs divided by the square of the number of individuals ... instead of tending to the lowest IQ in the group ... it quickly tends to zero as the size of the group grows (avg group IQ divided by the number of members) ... possibly

min(iq1, iq2, ..., iqN)/N

or the minimum IQ divided by the number of members.
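The candidate formulas are easy to compare side by side (purely illustrative, with made-up numbers):

```python
# Illustrative comparison of the group-"IQ" heuristics from the post.
def lowest(iqs):
    return min(iqs)                     # group tends to its weakest member

def average(iqs):
    return sum(iqs) / len(iqs)          # the optimistic baseline

def avg_over_n(iqs):
    return sum(iqs) / len(iqs) ** 2     # sum/N**2 == average divided by N

def min_over_n(iqs):
    return min(iqs) / len(iqs)          # minimum divided by group size

group = [120, 100, 80, 140]
print(lowest(group))      # 80
print(average(group))     # 110.0
print(avg_over_n(group))  # 27.5
print(min_over_n(group))  # 20.0
```

The last two formulas both shrink toward zero as the group grows, which is the point being made above.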

there has been this constant thread about not enuf trained resources dating back to at least the mid-70s ... and probably long before.

in the mid-70s ... the 370/148 project wanted to go after a greatly increased market ... but they recognized that there wasn't enuf traditional skilled resources. they needed to dumb down the skill requirements for supporting systems (although they didn't express it in those terms) ... which in turn required a layer of software that automatically did lots of the stuff that had previously been done by skilled support people.

the two analogies that were frequently used in the mid-70s were

1) Ford trying to figure out how to drastically expand market for the Model T ... up until then automobiles required a chauffeur that was also a fairly skilled mechanic. over time, automobile use was drastically simplified so that they could be operated by people that may not have the slightest idea of how they worked.

2) the phone company projection that expanding telephone use would require hiring at least half of the adults in the country as operators. the phone paradigm was changed with each user becoming their own operator ... aka they did their own dialing ... which was then handled by automatic switching equipment.

part of expanding the market was making the computer product significantly less expensive and more of a commodity. at some point, the market inhibitor became not the expense of the product ... but the cost of the highly trained talent needed to support the product. at some point the necessary highly trained talent could be several times more expensive than the product itself. so attempts are made to lower skill level necessary for operating computer products comparable to that necessary for operating an automobile (or a telephone).

threads versus task

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: threads versus task
Newsgroups: comp.arch
Date: Sat, 28 Oct 2006 10:09:34 -0600
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Multi-tasking means that you can run several programs (seemingly) concurrently. Each program would run in its own task. The programs may run in the same address space, or in different ones, protected from each other or not. "Task" is not a very well-defined concept, or maybe it is just a very general concept (and I will use it as general term in the rest of this posting); anyway, to have something with a more specific meaning, new terms were introduced:

A "process" (as used in Unix and most OS literature) is a task that runs in its own address space, protected from other processes, and communicating with them only through various OS mechanisms.

At some point people wanted to write programs consisting of several tasks. One can do this by having several processes for the program, but that is often suboptimal, because the communication between tasks is often more important for such programs, and having to go through OS mechanisms can be too slow; also, the protection from other processes is often not needed between such tasks.


os/360 had TCBs, or task control blocks ... for scheduling, execution and resource management. There was the system/language construct ATTACH that could create execution tasks under the primary scheduling control, and system WAIT/POST for coordinating operations in a multiprogramming environment. All of this started out in the same (real) address space.

For a lot of things, the TCB was a fairly heavyweight system construct ... especially for some of the evolving online applications. So you saw "subsystems" like CICS showing up in the late 60s that supported their own (lightweight) tasks (very much akin to the lightweight threads of some 20 years later) ... where the "subsystem" provided its own (lightweight) process creation and serialization operations. In the later 70s, you might find a CICS region controlling tens of thousands and then hundreds of thousands of ATMs ... even later there would be CICS regions that handled millions of (cable) settop boxes.

in any case, tasks were a system (or subsystem) control and resource management construct. multiprogramming could be done with system tasks or with cooperating operations within a single task (and single address space). this got more complicated when os/360 morphed into virtual address space operation for each "task" in the mid-70s as "MVS" (the default was that "task" and "address space" were somewhat equivalent ... but then you could use system tasking facilities to invoke multiple tasks in the same address space).

at the science center in the late 60s and early 70s, Charlie was doing a lot of work on multiprocessor fine-grain locking in the cp67 kernel and invented the compare&swap instruction (compare&swap mnemonic chosen because CAS are Charlie's initials). The initial attempt to try and get compare&swap included as part of (the new) 370 architecture was rejected on the grounds that test-and-set (left over from 360) was sufficient for multiprocessing control.

The challenge was that in order to get compare&swap justified in 370 ... a non-multiprocessor-specific use was needed. Thus were born the use examples of the atomic instruction for multiprogramming operations ... that aren't necessarily multiprocessor related. In the early 70s, the multiprogramming use examples were originally part of the instruction programming notes in the 370 principles of operation. later principles of operation moved the use examples to the appendix.

a couple recent posts mentioning compare&swap
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?

while multiprogramming and system TASKs are somewhat loosely related ... TASKs can be in different address spaces ... and multiple execution in the same address space can be done w/o (heavyweight) system TASKS.

recent principles of operation appendix reference

A.6 Multiprogramming and Multiprocessing Examples
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320

from above:
When two or more programs sharing common storage locations are being executed concurrently in a multiprogramming or multiprocessing environment, one program may, for example, set a flag bit in the common-storage area for testing by another program. It should be noted that the instructions AND (NI or NC), EXCLUSIVE OR (XI or XC), and OR (OI or OC) could be used to set flag bits in a multiprogramming environment; but the same instructions may cause program logic errors in a multiprocessing configuration where two or more CPUs can fetch, modify, and store data in the same storage locations simultaneously.

Subtopics:

• A.6.1 Example of a Program Failure Using OR Immediate • A.6.2 Conditional Swapping Instructions (CS, CDS) • A.6.3 Bypassing Post and Wait • A.6.4 Lock/Unlock • A.6.5 Free-Pool Manipulation • A.6.6 PERFORM LOCKED OPERATION (PLO)


... snip ...
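The flag-setting example in the appendix reduces to a fetch / modify / compare-and-swap retry loop. A sketch of the pattern (everything here is illustrative, not the actual principles-of-operation code; a lock stands in for the hardware atomicity of the CS instruction):

```python
import threading

class Word:
    """A shared word with a compare-and-swap primitive; the lock stands
    in for the hardware atomicity of the 370 CS instruction."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def cs(self, old, new):
        # atomically: if the word still equals old, store new and succeed
        with self._lock:
            if self.value == old:
                self.value = new
                return True
            return False

def set_flag_bit(word, bit):
    # the CS retry loop: re-fetch and retry if another CPU got in between
    while True:
        old = word.value
        if word.cs(old, old | bit):
            return

flags = Word()
threads = [threading.Thread(target=set_flag_bit, args=(flags, 1 << i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(flags.value)  # 255 -- every bit set, no lost updates
```

An unguarded read-modify-write (the OI-style fetch, OR, store sequence) can lose updates when two processors interleave between the fetch and the store; the CS retry loop detects the interleaving and redoes the update.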

threads versus task

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: threads versus task
Newsgroups: comp.arch
Date: Sat, 28 Oct 2006 13:07:19 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Before MVS/XA, it was almost impossible for an application to switch address space, even if privileged. Theoretically, it was there a bit earlier, but ....

re:
https://www.garlic.com/~lynn/2006t.html#22 threads versus task

the mid-60s os/360 philosophy heavily revolved around *scarce* real storage assumptions. the apis were all pointer passing and the i/o model involved direct transfers involving application buffers. the physical i/o commands were built directly by applications pointing to their real buffer addresses (and/or library routines directly called by applications).

the i/o library routines provided a number of asynchronous and synchronous abstractions. the asynchronous abstractions for i/o provided a natural multiprogramming environment ... where application logic might possibly be dealing with multiple i/o buffers and overlapping execution with i/o transfer ... requiring various amounts of WAIT operations for serialization.

one might assert that the posix activity from the late 80s and early 90s for async i/o and threading was to translate this mid-60s paradigm into the unix environment (some number of dbms applications would take advantage of such features ... somewhat emulating the operation of online os/360 dbms activity dating back to the 60s).

the os/360 transition to virtual address spaces in the mid-70s carried with it a number of problems.

the application convention of building the physical i/o commands no longer used real addresses ... the (buffer) addresses were now virtual. the low level kernel function supporting application i/o had to be enhanced to scan/interpret the application i/o command operations, create a *shadow* copy of the application i/o command sequence, and translate all the virtual addresses into real addresses (as well as fetching and pinning the associated virtual pages until the real i/o sequence had completed).
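The shadow-copy step can be sketched as a single pass over the channel program (hypothetical structures for illustration; the real CP translation code also handles data chaining, TICs, and buffers that cross page boundaries):

```python
# Hypothetical sketch of channel-program (CCW) shadowing: copy each
# command, translate its virtual buffer address to a real address, and
# record which pages must stay pinned until the I/O completes.

PAGE = 4096

def translate(page_table, vaddr):
    """Map a virtual address to a real address via a toy page table."""
    vpage, offset = divmod(vaddr, PAGE)
    return page_table[vpage] * PAGE + offset

def shadow_channel_program(ccws, page_table):
    shadow, pinned = [], set()
    for op, vaddr, count in ccws:
        raddr = translate(page_table, vaddr)
        shadow.append((op, raddr, count))
        # pin every page the transfer touches (this sketch assumes a buffer
        # never crosses a page boundary; real code splits such transfers)
        pinned.add(vaddr // PAGE)
    return shadow, pinned

page_table = {0: 7, 1: 3}            # virtual page -> real frame number
ccws = [("READ", 0x100, 512), ("READ", 0x1200, 512)]
print(shadow_channel_program(ccws, page_table))
```

The pinned set is what keeps the kernel from paging those frames out while the real channel program built from the shadow copy is still in flight.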

the prevalent use of pointer passing in APIs resulted in the kernel code sharing the application address space. Initially in mvs/370 with 16mbyte virtual address spaces, one per application ... the kernel occupied half of the 16mbytes, in theory leaving 8mbytes for each application.

however, the os/360 heritage also had a lot of "subsystem" services that weren't part of the kernel; each was now in its own unique virtual address space ... but still "called" by applications with a pointer-passing api. Initially this was addressed by a "common segment" that appeared in every virtual address space, where parameters could be stashed and then directly accessed by subsystems using the passed (parameter address) pointers. However, for large operations with a large number of possible different subsystems, it wasn't uncommon to find that the "common segment" had to be expanded to five mbytes. With the kernel taking 8mbytes out of each address space and the "common segment" taking 5mbytes (and potentially still growing), it would only leave 3mbytes for application use.

to help address this problem, the 3033 (introduced in the late 70s) provided "dual-address" space support. A new addressing mode was provided to semi-privileged subsystems (running in their own address space) that allowed them to "reach" across into the application virtual address space and access the parameters that needed to be referenced in the pointer-passing API. A subsystem implementation, running in its own virtual address space, could be multiprogrammed with each invocation having its own state consisting of registers and instruction address ... as well as a control register pointing to the invoking application virtual address space.

Later (mvs/xa) 370 virtual addressing was increased from 24bit to 31bit and dual-address space support was generalized with access registers and multiple address space operation. Not only could subsystems reside in their own address space ... but a lot of the former system library software that was loaded and executed as part of an application could be moved into a unique semi-privileged address space. A new hardware call instruction was defined that referenced a system defined table which controlled switching address space pointers and some number of other access control features. The operation could still appear as if it was directly executing under the application task control block (and resource control) ... but instead of execution continuing in a library routine in the application address space ... execution was continuing in a library routine that resided in a totally different address space. more detailed discussion of access registers and multiple address space operation:
https://www.garlic.com/~lynn/2006r.html#32 MIPS architecture question - Supervisor mode & who is using it?

Sort of the original default was a single TCB (for resource control) mapping to a single application (and later a single address space). However, various options and features allowed this to get potentially extremely complicated with possibly multiprogramming under a single TCB (providing something akin to multi-threading in single process), multiple TCBs operating in the same address space, a single TCB with multiple address spaces, and/or multiple TCBs and multiple address spaces.

in unix environments ... w/o posix async i/o and threading ... you had DBMS translated (to unix) where there were multiple different processes and address spaces ... with the address spaces setup to share a lot of common storage (in order to simulate lightweight threading in a single address space). early on, the DBMS would implement "raw" I/O to the physical device ... in order to achieve the effect of asynchronous operation and direct transfers to/from application memory/buffers.

somewhat in parallel with os/360 real storage operation later morphing into the virtual memory MVS system ... there was the original cp67 virtual machine system ... started originally in 1966 as cp40 on a custom modified 360/40 with virtual memory. In 1967, cp40 morphed into cp67 when standard virtual memory 360/67 machines became available. cp67 made extensive use of the virtual memory hardware in its virtual machine implementations. later cp67 morphed into vm370 when virtual memory became available on 370s. melinda's vm370 history, mentioning ctss, multics, cp40, cp67, etc:
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

the original relational/sql implementation was system/r done in the 70s on vm370 platform
https://www.garlic.com/~lynn/submain.html#systemr

this made use of multiple virtual address spaces with some amount of the virtual address space shared across independent cooperating processes (virtual machines). recent post mentioning some of the technology transfer for system/r from san jose to endicott for the sql/ds product:
https://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant HLASM coding practices ever defensible?

it is somewhat this model that, in the mid to late 80s, you saw some DBMS vendors translating into the unix environment with multiple processes each with their different address spaces ... where the different address spaces shared some amount of common storage (and the DBMS was using unix "raw i/o" for accessing disks for further asynchronous operation)

some other recent posts mentioning dual-address space and/or access register operation
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006r.html#26 A Day For Surprises (Astounding Itanium Tricks)
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

CMSBACK

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: CMSBACK
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 28 Oct 2006 14:17:47 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
search engine even turns up one of my old posts mentioning VOL1 and HDR1:
https://www.garlic.com/~lynn/2004q.html#20 Systems software versus applications software definitions

which discusses an old backup/archive system that i had written for internal use ... which then went thru several iterations, was eventually released as workstation datasave facility, morphed into ADSM, and is now known as TSM
https://www.garlic.com/~lynn/submain.html#backup


re:
https://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; Now : Programming practices

old email from the other person working on CMSBACK version 2. this is about enhancing the pattern matching capability in the user interface for file retrieval (from the backup/archive repository).

as noted in the above reference, Melinda's history starts with what would be CMSBACK version 3 (or possibly 4 ... depending on how you classify all the work done between my initial CMSBACK deployments and start of work by the people mentioned in Melinda's history).
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

Date: 10/25/79 02:11:58
To: wheeler

I had to make one change to OSPATM to make it work. The macro SREGS is a bit too much for me tonite so I replaced the one to set up addressability to the work area using R5 with the L R5,8(,R1) and USING PATSPC,R5. AND...... it works like a super star!!!! Try issuing the CMSBACK exec, ask for a report, and enter whatever patterns you like.. Disk load the returned RPT file and see what you get.. I've been playing with it now for a while and it really works great. (I will make the matching available on date and time tomorrow.. why stop with just the filename and filetype).


... snip ... top of post, old email index

for the original low-level CMSBACK interface to tape, I used a highly modified version of VMFPLC (mentioned in the old email below) that I had renamed VMXPLC. Not mentioned in the below ... but it was also enhanced to provide optimal processing for the paged mapped filesystem implementation that I had done
https://www.garlic.com/~lynn/submain.html#mmap

the start of buffers was page aligned ... and 15 800-byte data blocks could be three 4096-byte blocks. For small files, the minimum size data block on tape could be 800 bytes. With a separate FST record and data block, a tape with a lot of small files could have half (or more) of its length taken up with interrecord gaps. Merging the FST record and the first data record into the same physical tape record would cut the tape devoted to interrecord gaps in half (when lots of small files were involved).
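A rough illustration of the gap arithmetic (the 6250 bpi density and 0.3 inch interrecord gap are assumptions for illustration, not figures from the post; the 59-byte FST record size is from the old email):

```python
# Rough illustration of the interrecord-gap saving when merging the FST
# record with the first data block. Tape figures are assumptions (6250 bpi
# density, 0.3 inch interrecord gap), not from the original email.
DENSITY_BPI = 6250          # assumed recording density
GAP_IN = 0.3                # assumed interrecord gap

def tape_inches(record_bytes):
    return record_bytes / DENSITY_BPI + GAP_IN   # one record plus one gap

# small file: 59-byte FST record plus one 800-byte data block
separate = tape_inches(59) + tape_inches(800)    # two records, two gaps
merged = tape_inches(59 + 800)                   # one record, one gap

print(round(separate, 3))   # ~0.737 inches per small file
print(round(merged, 3))     # ~0.437 inches per small file
```

With many small files the gap footprint itself is exactly halved (two gaps per file down to one), which matches the "cut ... in half" estimate above.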

Date: 03/21/80 13:41:39
From: wheeler
To: somebody in endicott

The original VMFPLC was an update to release 2 DMSTPE. The code took the FST record which is placed in a trailing 800 byte record following the file dumped and placed it in a 59 byte record in front of the file dumped. It also blocked up to 5 800 byte data blocks per physical tape block. I captured the update early and maintained it (I heard that the development lost it and didn't have a copy) against the current release & plc level. I also enhanced the update to block up to 15 800 byte data blocks, to merge the FST record into the 1st physical data block dumped and to avoid rewriting the MFD after each file loaded.

VMFPLC2 appears to have several new features which would indicate that it is not a modification of the original VMFPLC (since as far as I know, I'm the only one with the source) but it still maintains the original tape format. I must confess that I've not looked at the new release 6/bsepp tape source (although I've heard that it has grown substantially and has been split into several files).


... snip ... top of post, old email index

and for an even earlier CMSBACK related email. standard CMS could "share" various filesystem areas across multiple virtual machines. However, the filesystem status information would be duplicated in every virtual address space. A hack was done to place a copy of some shared CMS filesystem information in shared (r/o protected) memory ... so there only needed to be a single physical copy of the filesystem metadata (shared across everybody). The "problem" for DMSDSK, DMSTPE, VMFPLC and VMXPLC was that they had a hack where they temporarily modified the file metadata, forcing the logical record size to match the physical disk record, and then restored it when they were done. This hack would fail if the file metadata information was located in shared (r/o protected) memory.

Date: 04/02/79 19:22:35
From: wheeler

Yorktown has done an update for DMSDSK (DISK DUMP) to handle the problem of dumping from a disk with FST in shared memory (DMSDSK YK187DMS). Can you merge with what you have been doing to DMSTPE for TAPE, VMFPLC, & VMXPLC??????


... snip ... top of post, old email index

and a related reference from Almaden Research
http://www.almaden.ibm.com/StorageSystems/Past_Projects/TSM.shtml

Are there more stupid people in IT than there used to be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Sun, 29 Oct 2006 00:33:33 -0600
Charlton Wilbur <cwilbur@mithril.chromatico.net> writes:
Yes, this crew wanted to transmit credit card data in the clear because they could not figure out how to establish an HTTPS connection.

My manager's response was, "sure! as long as you put in writing that you're choosing to do it, and you accept the liability should we get sued." *That* caused a flurry of activity.


recent post in this thread about some threats in this area
https://www.garlic.com/~lynn/2006t.html#5 Are there more stupid people in IT than there used to be?

and some other recent posts discussing some related threat issues:
https://www.garlic.com/~lynn/2006s.html#10 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006s.html#11 Why not 2048 or 4096 bit RSA key issuance?
https://www.garlic.com/~lynn/2006t.html#2 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2006t.html#8 Root CA CRLs

various past collected posts related to assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

as opposed to various past collected posts related to threats, vulnerabilities, fraud, and/or exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud

and a post from today about possibly new exploit
https://www.garlic.com/~lynn/aadsm25.htm#46 Flaw exploited in RFID-enabled passports

Universal constants

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Universal constants
Newsgroups: alt.folklore.computers
Date: Sun, 29 Oct 2006 00:53:01 -0600
Steve O'Hara-Smith <steveo@eircom.net> writes:
Let an economist in and it will soon get complex.

for quite a bit of drift ... I had made a number of posts last spring and this summer mentioning a talk given by the comptroller general ... the subject matter has apparently just been discovered in the past day or two

GAO Chief Warns Economic Disaster Looms, GAO Chief Takes to Road, Warns Economic Disaster Looms Even As Many Candidates Avoid Issue
http://www.cbsnews.com/stories/2006/10/28/ap/business/mainD8L1OC5G0.shtml
GAO Chief Warns Economic Disaster Looms
http://www.washingtonpost.com/wp-dyn/content/article/2006/10/28/AR2006102800420.html
GAO Chief Warns Economic Disaster Looms
http://news.moneycentral.msn.com/provider/providerarticle.asp?feed=AP&Date=20061028&ID=6145849
GAO Chief Warns Economic Disaster Looms
http://abcnews.go.com/Politics/wireStory?id=2613135
GAO Chief Warns Economic Disaster Looms
http://www.latimes.com/news/nationworld/politics/wire/sns-ap-america-the-bankrupt,1,515499.story?coll=sns-ap-politics-headlines
GAO Chief Warns Economic Disaster Looms
http://hosted.ap.org/dynamic/stories/A/AMERICA_THE_BANKRUPT?SITE=DCUSN&SECTION=TOP_STORIES&TEMPLATE=DEFAULT
GAO Chief Warns Economic Disaster Looms
http://apnews.myway.com/article/20061028/D8L1OC5G0.html
GAO chief warns economic disaster looms
http://seattlepi.nwsource.com/national/1155AP_America_the_Bankrupt.html

posts from last spring about his talk
https://www.garlic.com/~lynn/2006g.html#9
https://www.garlic.com/~lynn/2006g.html#14
https://www.garlic.com/~lynn/2006g.html#27
https://www.garlic.com/~lynn/2006h.html#2
https://www.garlic.com/~lynn/2006h.html#3
https://www.garlic.com/~lynn/2006h.html#4
https://www.garlic.com/~lynn/2006h.html#17
https://www.garlic.com/~lynn/2006h.html#19
https://www.garlic.com/~lynn/2006h.html#33

and post from late summer with some excerpts from his talk
https://www.garlic.com/~lynn/2006o.html#61

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Sun, 29 Oct 2006 08:51:58 -0700
The Future of CPUs: What's After Multi-Core?
http://www.informit.com/articles/article.asp?p=663085&rl=1

from above (the new, 40yr old thing)
This rule was driven home to me when I attended a talk by an IBM engineer about his company's new virtualization technology. He commented that his company had an advantage over other people working in the area: Whenever they were stuck, they could go along the hall to the mainframe division and ask how they solved the same problem a couple of decades ago.

... snip ...

semi-related thread from comp.arch
https://www.garlic.com/~lynn/2006t.html#23 threads versus task

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers,comp.lang.fortran
Date: Sun, 29 Oct 2006 09:11:20 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
I don't think I could justify SAS for home, and companies and staff often have to get by with what's made available; only those with a larger machine heritage seem to have expensive products like SAS, because they kept the software around when they downsized machines.

recent post about using multiple regression analysis to identify a significant performance improvement (a large financial application that ran for hours on a large number of fully decked out mainframes)
https://www.garlic.com/~lynn/2006s.html#24 Curiousity: CPU % for COBOL program

I was using a free package off the web that had various limits on the number of variables that it could handle ... which would have been pretty much eliminated using SAS.

the original reference to performance analysis work in the early and mid-70s at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

that eventually evolved into things like capacity planning
https://www.garlic.com/~lynn/submain.html#bench

had used multiple regression analysis from the fortran scientific subroutine package. i have some vague memory that when the scientific subroutine package was discontinued ... all of that stuff was possibly picked up by SAS(?).
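as an aside ... the multiple regression in those scientific subroutine package routines reduces to solving the normal equations by least squares; a minimal sketch in modern python with made-up data (nothing to do with the actual SSP code):

```python
# Minimal sketch of multiple (linear) regression via least squares,
# solving the normal equations (X'X) b = X'y with Gaussian elimination.
# Pure stdlib Python; the data below is invented for illustration.

def fit_least_squares(xs, ys):
    """Fit y = b0 + b1*x1 + ... + bk*xk and return [b0, b1, ..., bk]."""
    rows = [[1.0] + list(x) for x in xs]        # prepend intercept column
    k = len(rows[0])
    # build the normal equations
    a = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = a[r][col] / a[col][col]
            for c in range(col, k):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # back substitution
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, k))) / a[r][r]
    return coef

# exact-fit example: y = 2 + 3*x1 - 1*x2, so the fit recovers the coefficients
data_x = [(1, 2), (2, 1), (3, 5), (4, 2), (5, 7)]
data_y = [2 + 3 * x1 - x2 for x1, x2 in data_x]
print([round(c, 6) for c in fit_least_squares(data_x, data_y)])  # -> [2.0, 3.0, -1.0]
```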

the other language used extensively in that period was APL ... including a significant amount of performance and system modeling. One such (APL) application from the science center eventually evolved into the performance predictor, available on the internal HONE system that provided worldwide support to field, sales, and marketing people
https://www.garlic.com/~lynn/subtopic.html#hone

customer profile information could be input (workload, configuration, performance, etc) and "what-if" questions could be asked regarding what happens if there were changes in workload and/or configuration.

in the 70s and 80s there was quite a bit of use of APL for modeling and "what-if" scenarios ... a lot of which subsequently migrated to various spreadsheet technologies.
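a trivial flavor of the "what-if" idea ... this is just a textbook M/M/1 queueing estimate with invented numbers, not the actual performance predictor model (which isn't public):

```python
# Toy "what-if" sketch: estimate how mean response time might react
# to a workload or capacity change, using the classic M/M/1 formula
# R = 1 / (service_rate - arrival_rate).  Purely illustrative numbers.

def response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; requires utilization < 1."""
    util = arrival_rate / service_rate
    assert util < 1.0, "system saturated"
    return 1.0 / (service_rate - arrival_rate)

base = response_time(arrival_rate=8.0, service_rate=10.0)    # 80% busy today
whatif = response_time(arrival_rate=8.0, service_rate=12.0)  # "what if" a faster CPU?
print(f"base {base:.3f}s, after upgrade {whatif:.3f}s")  # -> base 0.500s, after upgrade 0.250s
```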

Storage Philosophy Question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Storage Philosophy Question
Newsgroups: bit.listserv.ibm-main
Date: Sun, 29 Oct 2006 09:47:50 -0700
rfochtman@ibm-main.lst (Rick Fochtman) writes:
He may also have been thinking about the "Great Chicago Flood" of a few years ago, when several prominent Chicago banks learned the folly of computer rooms in basements and sub-basements.

previous post in thread:
https://www.garlic.com/~lynn/2006s.html#28 Storage Philosophy Question

i remember places like Chicago Board of Trade also being affected

wasn't it construction or some other event that resulted in a break in the subsurface dam/barrier that was designed to keep the lake out?

note that even if the computer rooms aren't there ... lots of the related utilities, power backup, etc ... frequently are.

this is akin to the multitude of backhoe vulnerabilities; possibly one of the most famous was the one that isolated the new england internet several years ago. supposedly the original infrastructure had carefully laid out diverse routing for nine different circuits running over nine different physical trunks. over the years, while nobody was paying attention, all the circuits were eventually consolidated into one physical trunk ... which one day was taken out by a backhoe.

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers,comp.lang.fortran
Date: Sun, 29 Oct 2006 10:37:39 -0700
jmfbahciv writes:
Careful. The linkers and loaders aren't separate. What is even worse is that the kiddies who think they know how machines work are not aware of the reasons for linkers and loaders; they assume, rightly from their experience, that it is all one procedural step. This is a loss of knowledge that is happening right now.

one of the other things left over from the os/360 real memory days is figuring out where the program image was to be loaded ... and then having to swizzle all the "relocatable" address constants that were frequently randomly distributed thru-out the program image.

os/360 compilers and assemblers generated 80byte cards with 12-2-9 (0x02 hex) in column 1, followed by ESD, TXT, RLD, etc.

ESD cards gave the "external" symbol information ... the names of entry points into the program and names of external applications.
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)

TXT cards contained up to 56 bytes of actual program instructions and data
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)

RLD cards contained the location (displacement in the program) of "relocatable" address constants.
https://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games

once the linker/loader had decided on the address for loading the program image ... it then had to run thru all the RLD information, finding the associated "relocatable" address constants and swizzling their values to correspond to actual memory addresses. that is in addition to "resolving" (also swizzling) the addresses of external program entry points.
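the swizzle step can be sketched in a few lines; this is a toy (python), not the real object deck format -- actual OS/360 RLD entries carry ESD ids, flags, 3-byte as well as 4-byte adcons, and add/subtract relocation types:

```python
# Toy sketch of the RLD "swizzle": given an image assembled as if loaded
# at origin 0, and a list of offsets marking 4-byte address constants
# (what a real RLD record would supply), rebase the image to its actual
# load address by adding load_addr to each adcon.

def relocate(image: bytearray, rld_offsets, load_addr: int) -> bytearray:
    for off in rld_offsets:
        adcon = int.from_bytes(image[off:off + 4], "big")
        image[off:off + 4] = (adcon + load_addr).to_bytes(4, "big")
    return image

# image with two address constants at offsets 0 and 8, pointing at
# offsets 0x10 and 0x20 within the program
img = bytearray(0x24)
img[0:4] = (0x10).to_bytes(4, "big")
img[8:12] = (0x20).to_bytes(4, "big")
relocate(img, rld_offsets=[0, 8], load_addr=0x20000)
print(hex(int.from_bytes(img[0:4], "big")))  # -> 0x20010
```

the point of the adcon posts referenced below is exactly that this rewrite-in-place step modifies the program image ... which is what breaks r/o sharing of one image across multiple address spaces.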

recent posting discussing issues with os/360 convention of relocatable address constants
https://www.garlic.com/~lynn/2006j.html#38 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#30 Old Hashing Routine
https://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?

lots of posts about difficulty of translating os/360 (real memory) relocatable address constant convention into page mapped filesystem environment attempting to invoke program images that were r/o shared across multiple address spaces (and not being able to modify/alter the program image)
https://www.garlic.com/~lynn/submain.html#adcon

misc. others posts discussing format of 12-2-9 cards
https://www.garlic.com/~lynn/93.html#17 unit record & other controllers
https://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
https://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002h.html#1 DISK PL/I Program
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2004h.html#17 Google loves "e"
https://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005c.html#54 12-2-9 REP & 47F0
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005t.html#47 What is written on the keys of an ICL Hand Card Punch?
https://www.garlic.com/~lynn/2006b.html#1 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#17 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#58 REP cards
https://www.garlic.com/~lynn/2006n.html#1 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006n.html#11 Not Your Dad's Mainframe: Little Iron

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Sun, 29 Oct 2006 12:50:32 -0700
jsavard writes:
IBM might well remember why the IBM PC was so successful in the first place. Yes, the "IBM name" was a factor, but the product had its own intrinsic merits. It was competing against Z-80 based computers; *they* could be expanded to 64 K (or even 128 K with certain kludges) while the IBM PC could be expanded to 640 K.

original ref:
https://www.garlic.com/~lynn/2006t.html#27 The Future of CPUs: What's After Multi-Core?

my frequent theme has been that one of the big PC market penetrations was into the commercial market ... where a PC was about the same price as a 3270 ... and could provide both 1) 3270 terminal emulation and 2) some local desktop computing in a single screen/keyboard footprint.
https://www.garlic.com/~lynn/subnetwork.html#emulation

once that enormous market penetration had been obtained ... it was difficult for anything else to compete (something of a snowball effect: the large install base attracted a lot of application programmers, and a lot of applications attracted a bigger market). clones and plug-compatibles were one of the few remaining approaches.

and for other drift related to cluster scale-up
https://www.garlic.com/~lynn/95.html#13

and ha/cmp work
https://www.garlic.com/~lynn/subtopic.html#hacmp

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Mon, 30 Oct 2006 12:56:05 -0700
jsavard writes:
But while experience at Cray might prove to have *some* value for IBM engineers working on the next generation of micros, IBM has plenty of experience, with machines like Blue Gene, with massively parallel arrays of machines to draw on as well.

re:
https://www.garlic.com/~lynn/2006t.html#27 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#31 The Future of CPUs: What's After Multi-Core?

and for other drift beyond the previous mention, related to working on cluster scale-up while we were doing ibm's HA/CMP product
https://www.garlic.com/~lynn/subtopic.html#hacmp

sort of part of the earlier HSDT project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

... a recent HSDT post or two
https://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#11 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006t.html#12 Ranking of non-IBM mainframe builders?

... the original ibm mainframe tcp/ip product had been implemented in vs/pascal ... however, because of various issues ... it would consume nearly a whole 3090 processor while getting only 44kbytes/sec sustained thruput.

i did an rfc 1044 driver implementation (that eventually shipped in the product) ... which in some tuning at cray research got 1mbyte/sec sustained between a cray and a 4341-clone ... using only a modest amount of the 4341-clone.
https://www.garlic.com/~lynn/subnetwork.html#1044

trivia item: we were leaving on a flight from sfo to Minneapolis for doing some testing ... and were 20 mins or so late getting off the ground. part way thru the flight, i noticed a lot of whispering back in the galley and wandered back to see what it was all about. apparently something like five minutes after we left the ground, the earthquake had hit.

before the previously mentioned incident that drastically changed their direction
https://www.garlic.com/~lynn/95.html#13

kingston had been providing some assistance to chen ... as well as working on their own machine of somewhat similar design. for other recent drift along this line:
https://www.garlic.com/~lynn/2006q.html#9 Is no one reading the article?

threads versus task

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: threads versus task
Newsgroups: comp.arch
Date: Mon, 30 Oct 2006 23:52:25 -0700
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
How much different is BSAM today than BSAM for OS/360?

re:
https://www.garlic.com/~lynn/2006t.html#22 threads versus task
https://www.garlic.com/~lynn/2006t.html#23 threads versus task

reference overview
http://www-03.ibm.com/servers/eserver/zseries/library/refguides/pdf/gm130117.pdf

from above:
64-bit Real Storage Support
...
These z/OS functions are enhanced to exploit 64-bit real storage above 2 GB:

• Traditional access Methods (BSAM, QSAM, and others)


... snip ...

or try this assembler example program of highly asynchronous processing with BSAM
http://www.xephon.com/arcinframe.php//m033a08

then there is IBM Tivoli Storage Manager Performance Tuning Guide
http://publib.boulder.ibm.com/tividd/td/TSMM/SC32-9101-01/en_US/HTML/SC32-9101-01.htm

from above:
To use the BSAM overlap I/O buffering methods, a new server option is available:

TAPEIOBUFS number of buffers

The number of buffers specifies a number from 1 to 9. If 1 is specified, then Tivoli Storage Manager does not use the overlapping I/O for 3590 tape media. If a number greater than 1 is specified, Tivoli Storage Manager allocates the buffers based on the above formula.

The use of this option may increase the I/O throughput with the 3590 tape media, but it requires more memory allocation for the address space. An optimized throughput scenario is using this option set to 9, with the UNIX System Service Socket and a Gigabit Ethernet.


... snip ...

a couple recent posts mentioning TSM history
https://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; Now : Programming practices
https://www.garlic.com/~lynn/2006t.html#24 CMSBACK

Measuring I/O
http://as400bks.rochester.ibm.com/tividd/td/TDS390/SH19-6818-08/en_US/HTML/DRLM9mst60.htm

the above has a short description of "access methods" (i.e. library routines like bsam, qsam, etc, that build i/o channel programs and then invoke "EXCP/SVC0" to pass the built channel programs to the kernel for processing). the QSAM library routines handle buffer management and the WAIT serialization operations; the BSAM library routines place that responsibility on the application.
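a loose analogy of that QSAM/BSAM split, sketched with python threads -- a QSAM-style GET would block until the record is ready, while BSAM-style READ just starts the i/o and returns, and the program must later CHECK (wait) before touching the buffer. the names read_async/check below are mine, not an IBM api:

```python
# Double buffering in the BSAM style: start the READ for the next record
# into one buffer while processing the record already delivered into the
# other, then CHECK (join) before reusing a buffer.  The "device" here is
# a thread with a sleep standing in for channel-program latency.

import threading
import time

def slow_device_read(buf, record):
    time.sleep(0.01)          # pretend channel-program latency
    buf[:] = record           # deliver the record into the caller's buffer

def read_async(buf, record):
    """BSAM-style READ: kick off the I/O and return an ECB-like handle."""
    t = threading.Thread(target=slow_device_read, args=(buf, record))
    t.start()
    return t

def check(handle):
    """BSAM-style CHECK: block until the earlier READ completes."""
    handle.join()

records = [b"rec%d" % i for i in range(4)]
bufs = [bytearray(), bytearray()]
pending = read_async(bufs[0], records[0])
out = []
for i in range(1, len(records) + 1):
    check(pending)                       # wait for record i-1
    cur = bufs[(i - 1) % 2]
    if i < len(records):                 # overlap: start the next READ
        pending = read_async(bufs[i % 2], records[i])
    out.append(bytes(cur))               # "process" the completed buffer
print(out)  # -> [b'rec0', b'rec1', b'rec2', b'rec3']
```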

this is old research report describing os/360 data management and mentioning bsam
http://www.research.ibm.com/journal/sj/051/ibmsj0501D.pdf

this is lengthy z/OS concepts with bits and pieces overview of nearly everything
http://publib.boulder.ibm.com/infocenter/zoslnctr/v1r7/topic/com.ibm.zconcepts.doc/zconcepts.pdf

this is the (vol2) z/OS assembler services macro manual ... which includes the WAIT macro; it has some new forms, e.g. if you are in cross-memory mode.
http://publibz.boulder.ibm.com/epubs/pdf/iea2a920.pdf

and this is "vol1" ... which includes the ATTACH/ATTACHX macro for creating a new task:
http://publibz.boulder.ibm.com/epubs/pdf/iea2a760.pdf

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Tue, 31 Oct 2006 09:02:32 -0700
jsavard writes:
*That* reminds me.

When IBM came out with the 103-key keyboard - the one that *finally* (admittedly, the PCjr keyboard was first) had both shift keys, the Enter key, and the backspace key *all* in the right places... one dissenting voice was heard.

John C. Dvorak asked in InfoWorld why we needed yet another keyboard layout, and noted that most people were satisfied with the AT layout. But one other objection he made would have seemed reasonable.

Why did IBM add two function keys, F11 and F12? Since existing computers didn't have them, who would ever write software to use them?

Now, that _would_ have been a reasonable objection. If you didn't know that IBM made a line of terminals with either 12 or 24 function keys - for which terminal emulation packages were already available on the PC platform, but which were forced into strange keyboard arrangements (using alt-1 for PF 1, up to alt-= for PF12) by the lack of F11 and F12 on the PC keyboard.

And the terminal? The 3270.


we had several arguments around the introduction of the 3274 controller and 3278 terminal in the late 70s ... one was that they initially eliminated the 12-pfkey pad on the right.

3277 keyboard layout (from your web page)
http://www.quadibloc.com/comp/kyb01.htm

the 3277 had a lot of electronics in the head and keyboard of the terminal (and not all back in the 3272 controller). to cut down on manufacturing costs for the 3278, a lot of the electronics were moved back into the 3274 controller (as well as cutting down on keys). this contributed to the increased response time ... and it also made it impossible to do some of the local hardware fixes on the terminal.

recent post that includes timing comparison of 3272/3277 against 3274/3278 (both channel attach):
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

we also had a "FIFO" box that could be placed inline between the 3277 keyboard cable and where it plugged into the 3277 display head ... that handled the half-duplex tendency to sporadically lock the keyboard and block keystrokes. we were also able to add a resistor inside the 3277 keyboard that adjusted the typamatic delay and typamatic repeat rate to some reasonable value ... past post discussing both the timing comparison as well as being able to do local fixes to the 3277 terminal:
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design

the original 3278 keyboard took the program function key position for the numeric keypad ... and made the program function keys alternates ... past post mentioning this
https://www.garlic.com/~lynn/2005e.html#33 Stop Me If You've Heard This One Before

when we started arguing with the 3278 product group about program function keys (and the slow 3274 controller and other stuff) ... they came back and said that the 3278 terminal wasn't designed for programmers and interactive computer use ... but for data entry applications.

you could later get a 3278 keyboard option with pf1-pf12 across the top and pf13-pf24 on the right.

another of your web pages
http://www.quadibloc.com/comp/scan.htm

Universal constants

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Universal constants
Newsgroups: alt.folklore.computers
Date: Tue, 31 Oct 2006 09:24:19 -0700
jmfbahciv writes:
Thanks. I try to take a look at the webs. I'm reading Schwartzkopf's autobiography. Right now he's in the mid-70s and implementing something the army called TRADOC. The approach sounds a lot like what Boyd was figuring out. I can't find my Boyd book so I can't see if it mentions this retraining program.

re:
https://www.garlic.com/~lynn/2006t.html#3 Universal constants

note that several of the Boyd articles about desert storm, and the Military Channel Legends of Air Power program on boyd, mentioned conflict between schwarzkopf and boyd over the battle plan ... boyd calling schwarzkopf's plan something like "hey diddle, diddle, up the middle" (with tanks slugging it out until the winner is the last one standing).

using search engine for: desert storm schwarzkopf boyd hey diddle

Boyd's tactics and Operation Iraqi Freedom (illuminating background on Iraq strategy)
http://www.freerepublic.com/focus/f-news/899525/posts

from above:
Coram: When Cheney became secretary of defense, he was rare in that he knew more about strategy than most of his generals did. He called Boyd out of retirement in the early days of the Gulf war, and from him got an updating, if you will. And it was Boyd's strategy, not [Gen. Norman] Schwarzkopf's, that led to our swift and decisive victory in the Gulf war.

According to Bob Woodward's book, "The Commanders," Schwarzkopf was "playing" the DC warplanners when he gave them his initial battle plan (Hey-diddle-diddle, Up-the-middle). He expected that it would be rejected and that he would therefore get the extra troops he was asking for.


... snip ...

The Legends of Air Power boyd program mentions the Coram version (above), but not Woodward's

... misc. past posts mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd

misc. URLs from around the web mentioning boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Tue, 31 Oct 2006 12:48:47 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
That works some of the time. The Web didn't exist, non-hierarchical networks inside IBM didn't exist back then, and a slew of other problems. Sure, Unix and Linux were/are behind some of the hierarchical mainframe ideas, but they didn't cut it in France.

SNA in the 70s ... somewhat in conjunction with fs
https://www.garlic.com/~lynn/submain.html#futuresys

and somewhat afterwards ... defined a hierarchical communication architecture for big terminal networks ... pretty much wrapped around vtam/sscp and 3705/ncp (or actually, vtam/sscp and 3705/ncp wrapped around the sna architecture ... some number of people complained that it didn't matter what the sna architecture specified ... if you were to interoperate with vtam/3705, you had to conform to whatever vtam/3705 did ... and the two weren't always kept in total sync).

in that same time-frame ... my wife worked on a competing peer-to-peer architecture (AWP39) that lost out to SNA. she then did a stint in the JES2 group and then was con'ed into taking a job in pok in charge of loosely-coupled architecture (mainframe for cluster) ... where she originated the Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

and had quite a few battles with the sna organization ... sort of resulting in a truce ... where non-SNA was allowed within the datacenter walls, but SNA was required whenever the walls of the datacenter were crossed.

this also gave us lots of headaches in our high-speed data transport project (HSDT) starting in the early 80s
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and also contributed to the terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

and 3-tier architecture (and SAA) wars
https://www.garlic.com/~lynn/subnetwork.html#3tier

also at approximately the same time that arpa was starting up ... the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was originating at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

which was non-hierarchical, non-sna, non-vtam, and frequently non-3705 (much more akin to the peer-to-peer network architecture, AWP39, that my wife worked on in the mid-70s).

recent posts mentioning the size of the internal network passing 1000 nodes in 1983 (the year that arpanet switched over to internetworking operation) ... also including a reference that there were approx. 250 hosts on the arpanet that required major changes as part of the 1/1/83 upgrade to internetworking
https://www.garlic.com/~lynn/2006k.html#3 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#9 Arpa address

note that while the previous reference indicated that there were approximately 250 hosts on the arpanet that required major changes as part of the 1/1/83 upgrade to internetworking ... this ARPANET newsletter article was predicting that there might be 100 arpanet nodes by sometime in 1983
https://www.garlic.com/~lynn/2006k.html#40 Arpa address

it is possible that the difference between 100 arpanet nodes and 250 hosts ... was that on arpanet, the actual networking was handled by the outboard IMPs ... with hosts then running host-to-host protocol and connecting to the IMPs for the actual networking support (potentially allowing greater than one-to-one relationship between hosts and nodes).

by comparison, in the internal networking implementation ... all of the networking support executed directly on each host (and had nothing to do with vtam ... which was the incarnation of hierarchical sna). any outboard telecommunication control unit purely provided the physical point-to-point line support (i.e. things like line scanner operation, translating between line signal rise/lower and bit or no-bit).

reference to computer history museum item on the more than 300 nodes/hosts on internal network in the 70s being instrumental in the development and evolution of rexx
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006r.html#7 Was FORTRAN buggy?

i.e. (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20050309184016/http://www.computinghistorymuseum.org/ieee/af_forum/read.cfm?forum=10&id=21&thread=7

from above:
By far the most important influence on the development of Rexx was the availability of the IBM electronic network, called VNET. In 1979, more than three hundred of IBM's mainframe computers, mostly running the Virtual Machine/370 (VM) operating system, were linked by VNET. This store-and-forward network allowed very rapid exchange of messages (chat) and e-mail, and reliable distribution of software. It made it possible to design, develop, and distribute Rexx and its first implementation from one country (the UK) even though most of its users were five to eight time zones distant, in the USA.

... snip ...

and recent posts about nsfnet backbone (operational precursor to modern internet):
https://www.garlic.com/~lynn/2006s.html#20 real core
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2006s.html#51 Ranking of non-IBM mainframe builders?

other posts this year mentioning AWP39 peer-to-peer networking architecture effort:
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?

other posts this year mentioning internal network:
https://www.garlic.com/~lynn/2006.html#11 Some credible documented evidence that a MVS or later op sys has ever been hacked
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#9 Is there a workaround for Thunderbird in a corporate environment?
https://www.garlic.com/~lynn/2006b.html#12 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#35 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006c.html#14 Program execution speed
https://www.garlic.com/~lynn/2006e.html#25 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#27 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit
https://www.garlic.com/~lynn/2006j.html#8 ALternatives to EMail
https://www.garlic.com/~lynn/2006j.html#23 virtual memory
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006j.html#43 virtual memory
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2006j.html#49 Arpa address
https://www.garlic.com/~lynn/2006k.html#1 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address
https://www.garlic.com/~lynn/2006k.html#56 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006l.html#21 Virtual Virtualizers
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#46 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006m.html#9 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006m.html#26 Mainframe Limericks
https://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006n.html#5 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#26 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006n.html#36 The very first text editor
https://www.garlic.com/~lynn/2006o.html#34 Source maintenance was Re: SEQUENCE NUMBERS
https://www.garlic.com/~lynn/2006o.html#60 Greatest Software?
https://www.garlic.com/~lynn/2006o.html#64 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006q.html#7 Linux More Secure on System z?
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#5 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was: Real core)
https://www.garlic.com/~lynn/2006s.html#20 real core

Are there more stupid people in IT than there used to be?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Are there more stupid people in IT than there used to be?
Newsgroups: alt.folklore.computers
Date: Tue, 31 Oct 2006 16:44:02 -0700
for a little more drift

part of visits to various institutions the summer of 1981, reference to Bell Labs visit
https://www.garlic.com/~lynn/2006n.html#56 AT&T Labs vs. Google Labs R&D History

in the above reference there was some comparison about Bell Labs approach to personal computing and departmental servers vis-a-vis Xerox ... and a later post touching on the same subject:
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"

.....
'Xerox SDD' Xerox Systems Development Division (Palo Alto CA)

Visit to Xerox Systems Development Division

General Impressions

They have more machines than people. There are 300 machines for 200 employees. At least five of the machines are DORADOs (3 mips); the rest are a mixture of ALTOs, D machines, and Stars. Everyone has at least an ALTO in his office. All the machines are tied together with a 10 megabit Ethernet. On the net there are at least two file servers and various xerographic printers including a color printer.

The ALTO in Allen Brown's office had 28 megabytes of local storage. It gave off a low but definite fan noise. The Dorados are a bit smaller than a 2 by 3 file cabinet and are noisy (they are called APCs, *Armored Personal Computers*). The Stars seemed a bit quieter than Altos.

The Dorados were built because the Altos were felt to be too small. In particular they did not have enough power to do the processing required for preparing a color display with the response time they felt necessary. The Dorado's are also part of a very conscious effort to build in today's technology machines which Xerox thinks can be built inexpensively by the end of the decade. They are not (within reason) concerned about the manufacturing cost today. This philosophy is a continuation of the one which developed the Star. The Star which is now being developed is the commercial version of the Altos which have been in use for several years at internal Xerox locations.

Most of the programming is done in Mesa, an extended version of Pascal. The extensions are mainly in those areas where Pascal is weak for system programming.

The Stars were impressive with a good flicker-free display. The software is carefully thought out, and I think a lot can be learned from looking at the design decisions it embodies. In particular, the software shows immense restraint, simplicity, and consistency.


... snip ...

a couple other references:
http://www.parc.xerox.com/about/history/default.html
http://www.computerhistory.org/events/lectures/alto_06042001/
http://www.digibarn.com/friends/alanfreier/index.html
http://www.physorg.com/news110781861.html

Vulnerability Assessment of a EAL 4 system

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Vulnerability Assessment of a EAL 4 system
Newsgroups: comp.os.linux.security,comp.os.linux.misc,comp.os.linux.setup,comp.security.misc,comp.security.unix
Date: Tue, 31 Oct 2006 17:11:33 -0700
Neil Jones <castellan2004-email@yahoo.com> writes:
I am looking at a Linux server which has been accredited as a EAL4 system by IBM. During the assessment, I was looking for standard Linux protections like iptables, ssh etc. On this server, there is no iptables.

Regardless, I would like to know how to evaluate a EAL 4 system. What do you need to look for in the EAL 4 system in production that could become vulnerable?


orange book like stuff ... sort of assumed that everything was a general purpose computer and had to have provisions to handle everything that a general purpose computer might encounter (including various kinds of multi-user sharing). there were somewhat generalized criteria that things were evaluated against.

i've somewhat characterized the change over to common criteria ... as recognizing that not everything is a general purpose computer (including multi-user sharing) ... and so there are all sorts of provisions in common criteria for specifying the "protection profile" against which something will be evaluated.

there is some general guidance about what kinds of things need to be in a "protection profile" for different evaluation levels ... but without the specific protection profile ... you have no real idea what specific evaluation has been performed.

it is also possible that there could be security things that you might be interested in doing ... that just weren't considered or included in the protection profile used for the evaluation.

ostensibly one of the purposes of evaluation was so you could compare the evaluation levels of two similar products and use the evaluation to help in the choice ... under the assumption that using the same protection profile would result in comparable evaluations. However, a couple years ago, there was a statement that of the 64-some evaluations that had been performed at that time, something like sixty had non-public deviations from the published protection profile (making it difficult to use evaluations as part of comparing similar products).

National Information Assurance Partnership (NIAP) home page
http://www.nsa.gov/ia/industry/niap.cfm
http://www.nsa.gov/ia/business_research/partnerships_with_industry/niap_and_cots_product_evaluations.shtml

The Common Criteria Evaluation and Validation Scheme
http://niap.bahialab.com/cc-scheme/

Common Criteria Portal
http://www.commoncriteriaportal.org/

List of Protection Profiles (against which evaluation are performed)
http://www.commoncriteriaportal.org/public/consumer/index.php?menu=5

under operating systems in the above ... there is

"Multi-level Operating Systems in Medium Robustness Environments PP" protection profile (at EAL4+)
http://www.commoncriteriaportal.org/public/files/ppfiles/PP_SLOSPP-MR_V1.22.pdf

"Multi-level Operating Systems in Medium Robustness Environments" certification report (at EAL4+)
http://www.commoncriteriaportal.org/public/files/ppfiles/PP_VID204-VR.pdf

then there is

"Single-level Operating Systems in Medium Robustness PP" protection profile (at EAL4+)
http://www.commoncriteriaportal.org/public/files/ppfiles/PP_SLOSPP-MR_V1.22.pdf

"Single-level Operating Systems in Medium Robustness PP" certification report (at EAL4+)
http://www.commoncriteriaportal.org/public/files/ppfiles/PP_VID203-VR

whole lot of past posts mentioning risk, fraud, exploits, and vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

and some number of past posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

Why these original FORTRAN quirks?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why these original FORTRAN quirks?
Newsgroups: alt.folklore.computers,comp.lang.fortran
Date: Wed, 01 Nov 2006 12:08:41 -0700
pa@see.signature.invalid (Pierre Asselin) writes:
Probably because of the move to shared libraries. You can't do relocation if different processes map the same code to different addresses.

AFAIK relocation is still used when linking statically.


aka another os/360 characteristic of relocatable adcons that were randomly distributed thru-out code and data.

other systems have had conventions for doing shareable code/libraries such that location specific details were kept in separate process control structures ... and the execution convention accessed that location specific information with register-based operations (i.e. registers were part of process and address space specific operation ... as were the associated process control structures).

I fiddled with this a lot for the original shared segment stuff that I built on top of the paged mapped filesystem that I did for cms in the early 70s (originally on cp67 platform).
https://www.garlic.com/~lynn/submain.html#mmap

most of cms used os/360 derived assemblers, compilers, and linker/loader ... so about the only place you could fiddle all the default relocatable address constant conventions was in assembler implemented applications:
https://www.garlic.com/~lynn/submain.html#adcon
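the difference between the two conventions can be sketched in modern terms ... a toy python illustration (the names and structures here are mine, purely illustrative, not anything from os/360 or cms):

```python
# toy sketch of the two conventions discussed above (illustrative only).

# os/360 style: a location-specific value embedded alongside the code ...
# every process mapping this code at a different address needs the
# embedded constant relocated (swizzled) at load time, so the code image
# can't simply be shared read-only.
EMBEDDED_BUFFER_ADDR = 0x20000        # stand-in for a relocatable adcon

def copy_os360_style(data):
    # the code itself "knows" where its work area is
    return (EMBEDDED_BUFFER_ADDR, data)

# shared-code style: the code is pure/read-only; all location-specific
# state arrives via a per-process control block, reached through a
# dedicated register in the real convention.
class ProcessControl:
    def __init__(self, buffer_addr):
        self.buffer_addr = buffer_addr  # differs per address space

def copy_shared_style(ctl, data):
    # identical code can be mapped at any address in any process,
    # because nothing location-specific is embedded in it
    return (ctl.buffer_addr, data)
```

i.e. two processes with different per-process control blocks can run the identical (r/o shared) code and each still find their own work areas.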

vm370/cms picked up a trivially small subset of the implementation for product release as something they called DCSS (discontiguous shared segments) ... the vm370 kernel changes allowed a new way of specifying shared code ... but at a fixed/common address location ... and the cms changes were for code to execute in r/o protected shared storage (but conforming to the vm370 kernel changes at fixed location).

Note that DWSS was different from DCSS. DWSS was part of the original technology transfer of system/r from sjr to endicott for sql/ds.
https://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant HLASM coding practices ever defensible?

system/r was the original relational/sql implementation, done on vm370 platform
https://www.garlic.com/~lynn/submain.html#systemr

various recent posts this year mentioning DCSS
https://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
https://www.garlic.com/~lynn/2006.html#13 VM maclib reference
https://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#18 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#28 DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#35 Charging Time
https://www.garlic.com/~lynn/2006b.html#4 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#7 Mount a tape
https://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
https://www.garlic.com/~lynn/2006j.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2006m.html#54 DCSS
https://www.garlic.com/~lynn/2006m.html#56 DCSS
https://www.garlic.com/~lynn/2006n.html#5 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#14 RCA Spectra 70/25: Another Mystery Computer?
https://www.garlic.com/~lynn/2006n.html#45 sorting
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#27 dcss and page mapped filesystem
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006s.html#7 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was: Real core)
https://www.garlic.com/~lynn/2006t.html#15 more than 16mbyte support for 370

Encryption and authentication

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Encryption and authentication
Newsgroups: comp.security.firewalls
Date: Wed, 01 Nov 2006 12:49:40 -0700
mhostettler writes:
Is anyone could please explain the relationship between authentication and encryption.

Traffic can be authenticated without being encrypted. Is it possible to have encryption without authentication?

I read that, in OSI 17799: "the cryptographic techniques protect the confidentiality, integrity and authenticity of information".

So, it seems that encryption couldn't exist without authentication.


the security "PAIN" acronym

P - privacy (sometimes as CAIN ... confidentiality)
A - authentication
I - integrity
N - non-repudiation

so encryption technology can be used for hiding information, achieving security "privacy" (or "confidentiality").

given that you know that only specific entities have access to a specific encryption key ... then it can be possible for encryption to also imply authentication (because part of the encryption business process requires that only specific entities have access to the associated encryption key).

so as part of a cryptographic business process, a secure hash of information can be taken and also encrypted. if the recomputed hash of a decrypted message is the same as the decrypted original hash ... then the implication is that the message has not been modified ... providing for integrity.

an example is asymmetric key cryptography technology.

using asymmetric key cryptography technology, a public/private key business process is defined ... where the public key is widely publicized and the corresponding private key is kept confidential and never divulged.

it is possible to take an entity's public key and encode a message. privacy/confidentiality is presumed because the decoding of the message can only be done by the entity with access to the corresponding private key.

a digital signature business process is defined utilizing the public/private key business process.

the secure hash of some information is calculated and encoded with the entity's private key.

a relying party processing the information can recalculate the secure hash and compare it with the original secure hash decoded with the corresponding public key.
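the sequence of steps can be sketched end-to-end. real asymmetric crypto isn't in the python standard library, so this toy substitutes a keyed hash (hmac) for the private-key encode, with the verifier holding the same key in place of a public key ... the point is the flow (hash, encode, recompute, compare), not the cryptography:

```python
import hashlib
import hmac

def sign(message: bytes, private_key: bytes) -> bytes:
    # 1. take a secure hash of the information
    digest = hashlib.sha256(message).digest()
    # 2. encode the hash with the private key (stand-in: keyed hash)
    return hmac.new(private_key, digest, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes, key: bytes) -> bool:
    # the relying party recalculates the secure hash ...
    digest = hashlib.sha256(message).digest()
    # ... and compares it against the (decoded) original hash
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

msg = b"pay to the order of ..."
key = b"kept confidential and never divulged"
sig = sign(msg, key)
assert verify(msg, sig, key)              # integrity + authentication
assert not verify(msg + b"0", sig, key)   # any modification is detected
```

matching hashes implies the information hasn't changed (integrity) and that it was signed by an entity with access to the key (authentication); with real public/private key pairs the verifier would only ever hold the public key.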

from 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


the relying party, on matching the two secure hashes (in the digital signature business process), can infer that

1) the information has not changed (integrity)

2) something you have authentication, aka the original digital signature was done by an entity with exclusive and unique access to the corresponding private key, which has been kept confidential and never divulged.

if you combine digital signature (for integrity and authentication) with public key encryption of the information (for privacy/confidentiality) ... you can achieve three of the four PAIN characteristics.

note that "N" in PAIN is a lot harder. there is some unfortunate semantic confusion because the term "digital signature" and the term "human signature" both contain the word "signature". there has sometimes been the misbelief that the "digital signature" business process (integrity and authentication) can be assumed to be equivalent to the "human signature" process ... which implies that the person has read, understood, agrees, approves, and/or authorizes the information. however there is actually a vast chasm between "digital signature" and "human signature". misc. posts discussing various signature characteristics
https://www.garlic.com/~lynn/subpubkey.html#signature

for other drift ... we were called in to consult with a small client/server startup that wanted to do payments on their server. they had this technology called SSL (or sometimes HTTPS). the resulting payment processing implementation is sometimes now referred to as electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

SSL encryption has been used to hide the credit card account number, preserving its privacy and confidentiality.

the risk is that just divulging the account number can result in fraudulent transactions ... various posts regarding shared-secret based business processes
https://www.garlic.com/~lynn/subintegrity.html#secret
and account number harvesting vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#harvest

... and a little more context from a post about security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

a little later in the x9a10 financial standards working group, the requirement for the x9.59 standards work was to preserve the integrity of the financial infrastructure for all retail payments
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

part of the standard eliminated the risk associated with divulging the account number (resulting in fraudulent transactions). digital signature business process was used to provide integrity and authentication. furthermore, part of the standard was a business rule that account numbers used in x9.59 retail financial transactions could not be used in non-authenticated transactions.

the account number scenario with SSL is that the planet needs to be buried under miles of cryptography for hiding in order to prevent fraudulent transactions (aka enormous amounts of privacy/confidentiality).

in x9.59, the fraud risk related to divulging account numbers is eliminated ... and therefore it is no longer necessary to hide (x9.59) account numbers. In effect, x9.59 manages to substitute integrity and authentication for privacy/confidentiality as the countermeasure to account number related fraud (to preserve the integrity of the financial infrastructure for all retail payments, it is no longer necessary to hide the account number).

lots of past posts mentioning threats, exploits, vulnerabilities, and fraud
https://www.garlic.com/~lynn/subintegrity.html#fraud

and misc. posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

recent discussion regarding "naked transactions" ... the requirement for enormous amounts of "hiding" (privacy/confidentiality) because any trivial leakage of account numbers leads to enormous fraud risk. the alternative is to armor every transaction with a digital signature (integrity, authentication) ... eliminating the enormous fraud risk related to even trivial account number leakage:
https://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
https://www.garlic.com/~lynn/aadsm24.htm#7 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#9 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#10 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#12 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#14 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#22 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#25 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm24.htm#26 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#27 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#30 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#31 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#32 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#37 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#41 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#42 Naked Payments II - uncovering alternates, merchants v. issuers, Brits bungle the risk, and just what are MBAs good for?
https://www.garlic.com/~lynn/aadsm24.htm#43 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm25.htm#1 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm25.htm#4 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm25.htm#9 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm25.htm#10 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm25.htm#20 Identity v. anonymity -- that is not the question
https://www.garlic.com/~lynn/aadsm25.htm#28 WESII - Programme - Economics of Securing the Information Infrastructure
https://www.garlic.com/~lynn/aadsm25.htm#38 How the Classical Scholars dropped security from the canon of Computer Science

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 01 Nov 2006 15:39:09 -0700
Walter Bushell <proto@panix.com> writes:
I've worked in places where that was the most powerful computer, and we had hundreds of programmers.

part of the 195 story was that it required highly optimized applications to reach peak sustained thruput from its pipeline ... about 10mips. for most normal codes, thruput was more like 5mips.

sjr ran a 370/195 (os/360) mvt service thru the late 70s ... and there were jobs that sat in the work queue for several weeks waiting for execution. also because of its highly complicated hardware implementation ... it wasn't practical to add virtual address translation ... which eventually became available for all other 370s (and with the advent of virtual memory on all models of 370s, all the operating systems transitioned to virtual memory).

i got somewhat involved in a project to add dual i-stream (multi-threading) to the 195; replicate instruction address, registers, etc ... work in the pipeline would have a one-bit tag added indicating which i-stream it belonged to. the idea was that if most conventional code only kept the pipeline half-full ... then a pair of i-streams could keep the pipeline full and operating at an aggregate thruput of 10mips. this never shipped (and it didn't address the issue of retrofitting virtual memory to the machine).
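the tagging idea can be sketched as round-robin issue from tagged instruction streams ... a toy sketch (nothing to do with the actual 195 pipeline logic, just the bookkeeping):

```python
# toy sketch of dual i-stream issue: each pipeline slot carries a tag
# saying which instruction stream it belongs to; a single stream that can
# only keep half the slots busy leaves the rest empty, while a second
# stream fills them.
def issue(streams, slots):
    """round-robin instructions from the given streams into pipeline slots,
    returning a list of (stream_tag, instruction) pairs."""
    schedule = []
    i = 0
    while len(schedule) < slots and any(streams):
        tag = i % len(streams)
        if streams[tag]:
            schedule.append((tag, streams[tag].pop(0)))
        i += 1
    return schedule
```

with one stream of two instructions and four slots, only two slots get filled; add a second stream and all four slots carry work, each entry tagged with its owning i-stream.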

one of the applications that wasn't getting exactly great turnaround on the sjr 195 was air-bearing simulation in support of designing the 3380 floating, thin-film disk heads (possibly an hr every several weeks).

as i've mentioned before ... I got involved in hardening an operating system so that it could be used in disk engineering and product test labs (bldg. 14 & 15). they had a number of processors that were used in dedicated stand-alone mode for doing disk regression testing. because of the high rate of faults from prototype and engineering hardware, they were unable to operate with conventional operating systems
https://www.garlic.com/~lynn/subtopic.html#disk

the labs would get early processor models for dedicated regression testing (including early availability of 4341 and 3033 processors). with the availability of operating system on the labs machine ... we found that normal disk regression testing only required a few percent of processor capacity ... the rest of the processors became available for other types of application.

3033 was about 4.5mips (a little less than half 195 peak sustained). however, the air-bearing simulation work could get several hrs every day on the bldg. 15 3033 ... compared to maybe an hr every several weeks on the bldg. 28 195.

past posts mentioning the air-bearing simulation
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#13 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#6 Google Architecture
https://www.garlic.com/~lynn/2006l.html#18 virtual memory
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 01 Nov 2006 16:23:49 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
Programmed function keys are stupid. They are well intended, but their key sequences aren't standard across all keyboards (like arrow keys). I used to like them, but they lock you into a particular piece of vendor hardware (thank god (Ken Arnold and the Evans crew) for things like termcap).

I think the best implementation of PF keys were Evans & Sutherland Picture System boxes which were dynamically labelled (really programmed), but these things weren't cheap. Xerox did the right thing for their keyboards leaving them off.


so all the PF keys were software programmable ... although specific applications tended to have specific default operations ... i.e. PF3 always mapped to the same function.

so the "data entry" 3277 started out with PF keys as ALT functions across the top row.

the 3278 then introduced a new row of PF1-PF12 across the top ... somewhat like a PC keyboard.

then you could get a keyboard with a row of PF1-PF12 across the top and PF13-PF24 on the right-hand side ... somewhat where the 3277 PF1-PF12 keypad had been located.

so although the PF keys were software programmable ... a lot of applications from the 3277 had embedded assignments for specific PF keys. So with the availability of PF keys on the side of the 3278 ... there were several requests for the operating system layer to swap the scancodes between PF1-PF12 and PF13-PF24.

There was a separate issue, starting earlier with the software programmable 3277 PF keys, with regard to application consistency. Since applications could embed specific associations with specific keys, and there was no controlled specification for the PF keys ... like the PC "page up" and "page down" keys ... different applications bound specific functions to specific keys (and for some reason it took quite a long time to add user configurable profiles to these applications).

in any case, w/o an explicit, well established convention (like physical labels on the keys) ... there was an extended period of inconsistent application use of PF keys (aggravated by applications that failed to provide user profile configurability).

There were almost infamous arguments in the late 70s and early 80s about PF3 consistently being used for the "END/QUIT/EXIT" function. There were a variety of emerging full-screen applications, application menu environments, email readers, editors, etc. There was some irritability when each environment assigned the END/QUIT function to a different PF key (and user profile configurability was not generally available).

from somewhere long ago and far away

Date: 12/11/78 09:02:16
To: wheeler

how about a cp command that would reset pfkeys 13-24 to be the same as pfkeys 1-12 ?????

then on my 3278 i could use the pfkeys in the normal position for any application that used pfkeys 1-12


... snip ... top of post, old email index
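the remapping the email asks for amounts to folding the 24 key numbers back onto 12 ... a one-line sketch (illustrative only, not the actual cp code):

```python
def fold_pfkey(n: int) -> int:
    # map PF13-PF24 back onto PF1-PF12; PF1-PF12 are unchanged
    return ((n - 1) % 12) + 1
```

so PF13 acts as PF1, PF24 as PF12, and PF1-PF12 behave as before ... keeping the keys "in the normal position" for applications that only use PF1-PF12.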

Date: 08/11/82 12:44:20
To: wheeler

Hi there. I have had a bug reported to me concerning the assignment of PF13-24 with my VMSG mods. Unfortunately, we don't have many terminals with more than 12 PF keys around here, so that code wasn't tested very well. Sorry 'bout that! Hopefully, I'll have an updated version available soon and I'll be re-distributing the package. I'll also be adding some other updates which people have requested (6 EPILOG/PROLOG lines, for example). If you have any other requests, let me know. It seems that there are various versions of VMSG running around and I'm trying to get a hold of the other enhancements to incorporate into my version. A bientot,

IBM Europe Technical Support
23 Allee Maillasson
92100 Boulogne, France


... snip ... top of post, old email index

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 01 Nov 2006 17:21:17 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
And no mention of Ira's use of RSCS in the entire post.

re:
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?

lots of past collected posts mentioning bitnet and/or earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

... university based network using similar technology to that used in the internal network.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

while the number of nodes on the internal network was larger than the number of arpanet/internet nodes ... from just about the beginning until possibly sometime mid-85 ... there have also been claims that the number of bitnet/earn nodes (independent of the internal network nodes) was larger than the number of arpanet/internet nodes for at least part of the early 80s.

post including old email referencing the creation of EARN
https://www.garlic.com/~lynn/2001h.html#65

note that while possibly all the nodes on the internal network were ibm mainframes ... there were some number of bitnet/earn network nodes that had vnet/rscs emulation running on other kinds of processors.

misc. posts from this year mentioning bitnet &/or earn:
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006b.html#12 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#25 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#27 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006j.html#8 ALternatives to EMail
https://www.garlic.com/~lynn/2006j.html#23 virtual memory
https://www.garlic.com/~lynn/2006j.html#43 virtual memory
https://www.garlic.com/~lynn/2006m.html#9 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006m.html#10 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006o.html#60 Greatest Software?
https://www.garlic.com/~lynn/2006r.html#5 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?

heavily edited email ...

Date: 12/12/81 11:25
To: wheeler

.. is the guy that Fuchs (CUNY - BITNET) wrote to about connecting BITNET to VNET.. XXXXX is driving the BITNET connection with YYYY and Fuchs) but they need some resources.. Also, YYYY (works in ZZZZ) also just recently initiated a contract with Univ. of Wisconsin for putting Internet/TCP protocol on VM/370


... snip ... top of post, old email index, HSDT email

then there was also NSF funded CSNET in the early 80s (independent of the heavily corporate funded bitnet and earn):
CSNET (Computer Science NETwork) is funded by NSF, and is an attempt to connect all computer science research institutions in the U.S. It does not have a physical network of its own, but rather is a set of common protocols used on top of the ARPANET (Department of Defense), TeleNet (GTE), and PhoneNet (the regular phone system). The lowest-cost entry is through PhoneNet, which only requires the addition of a modem to an existing computer system. PhoneNet offers only message transfer (off-line, queued, files). TeleNet and ARPANET allow higher-speed connections and on-line network capabilities such as remote file lookup and transfer on-line, and remote login.

and misc. posts mentioning CSNET
https://www.garlic.com/~lynn/internet.htm#0 Internet and/or ARPANET?
https://www.garlic.com/~lynn/98.html#59 Ok Computer
https://www.garlic.com/~lynn/internet.htm#4 Internet (TM) and USPTO
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002p.html#39 20th anniversary of the internet (fwd)
https://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004o.html#47 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005e.html#46 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2006j.html#49 Arpa address
https://www.garlic.com/~lynn/2006k.html#3 Arpa address

1960s railroad data processing on L&N

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1960s railroad data processing on L&N
Newsgroups: misc.transport.rail.americas,alt.folklore.computers
Date: Wed, 01 Nov 2006 19:42:25 -0700
John McCoy <igopogo@ix.netcom.com> writes:
Not really. Back in the batch era generally the "application" programmer just assumed all the data was on tape. A "front end" application was responsible for collecting up the data (reading cards or whatever) and putting it on the tape.

What you describe is more true of the early days of time-sharing, in the mid-late 70's.


ctss, multics, cp67/cms ... 60s ... a lot of this is in melinda's vm history
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

there were at least two commercial timesharing service bureaus formed using cp67 in the 68/69 timeframe ... which somewhat later moved to vm370 ... and in the early 70s, tymshare started offering commercial vm370 service. misc. postings on commercial timesharing
https://www.garlic.com/~lynn/submain.html#timeshare

a little more on one of the early cp67 timesharing service bureaus
https://www.garlic.com/~lynn/2006k.html#35

and also references here to series of URLs at the computer history site
https://www.garlic.com/~lynn/2006k.html#36

and pointer here to some tymshare history offering vm370 commercial timesharing starting in the early 70s
https://www.garlic.com/~lynn/2006k.html#37

there is the multics web site
https://www.multicians.org/multics.html

cp67 had been done at the cambridge science center on the 4th flr of 545 tech. sq
https://www.garlic.com/~lynn/subtopic.html#545tech

while multics was on the 5th flr of the same building. both cp67 and multics trace some amount of history back to ctss.

To RISC or not to RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: To RISC or not to RISC
Newsgroups: alt.lang.asm,comp.arch
Date: Wed, 01 Nov 2006 19:57:32 -0700
Joe Seigh <jseigh_01@xemaps.com> writes:
There isn't. I worked on VM/XA which was written in assembler but really only a few percent of it actually needed to be in assembler. Mostly for non standard linkage type things like innterrupt handlers and context switching. There were some performance sensitive parts but not as much as you might think.

this discusses taking a major portion of the vm370 kernel (all written in assembler), the spool file subsystem, moving it into a virtual address space (a service virtual machine, using the nomenclature of the period), rewriting it in vs/pascal ... and making it run substantially faster ... in part to try to increase thruput by two orders of magnitude.
https://www.garlic.com/~lynn/2006s.html#7 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006s.html#25 VM SPOOL question

this was done in part to try to speed up the thruput of vnet nodes in HSDT (high-speed data transport)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

vnet made heavy use of the vm spool file system ... and aggregate thruput could be limited to five 4k blocks per second (20kbytes/sec). as noted, HSDT needed several hundred thousand to multiple million bytes/sec thruput.
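as a back-of-the-envelope sketch of the gap (the 2mbyte/sec target figure below is an assumption read off the "multiple million bytes/sec" wording, not a number from the original post):

```python
# vm spool throughput vs. HSDT targets, using figures from the post:
# aggregate spool thruput limited to five 4k blocks per second.

BLOCK_SIZE = 4096            # spool file block size, bytes
BLOCKS_PER_SEC = 5           # observed aggregate limit

spool_bps = BLOCK_SIZE * BLOCKS_PER_SEC   # ~20 kbytes/sec

# assumed HSDT target, taken from "multiple million bytes/sec"
hsdt_target_bps = 2_000_000

shortfall = hsdt_target_bps / spool_bps   # factor of ~100

print(spool_bps, shortfall)
```

which is the roughly two-orders-of-magnitude increase mentioned above.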

... and for something completely different ... in the very early days of rexx, i wanted to demonstrate that rexx wasn't just another command scripting language (like exec and exec2). the dump/fault analysis application (ipcs) was a large assembler-implemented application. I set out to demonstrate that I could re-implement the ipcs function, working half time over three months, in (interpreted) rexx, and that it would run ten times faster and have ten times as much function (as the original application implemented in assembler). misc. past posts referencing the activity
https://www.garlic.com/~lynn/submain.html#dumprx

To RISC or not to RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: To RISC or not to RISC
Newsgroups: alt.lang.asm,comp.arch
Date: Wed, 01 Nov 2006 21:12:38 -0700
Joe Seigh <jseigh_01@xemaps.com> writes:
People haven't caught onto service virtual machines yet. I've been trying to talk people into moving their net security / firewall stuff into a guest machine which would emulate a virtual network interface for another guest. This would have the advantage of being more secure than running the software as an application since if the other guest gets compromised, the security stuff retains its integrity. Also it has the advantage of not being locked out of guest OS at the whim of the OS vendor. You would think it's a no brainer. Somebody will do it. It just won't be the current players in the business.

I once moved a network simulator into a guest virtual machine and mapped the 370 channel i/o over vmcf, a sort of hacked virtual channel device.


re:
https://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC

how 'bout virtual appliances ... possibly not quite what they are talking about in this news release ... instead of ease of deployment ... talk about partitioning, security, isolation, ease of management, etc.
http://triangle.dbusinessnews.com/shownews.php?newsid=96829&type_news=latest

similar but different post here mentioning padded cell
https://www.garlic.com/~lynn/2006s.html#65 Paranoia..Paranoia..Am I on the right track?.. any help please?

a virtual appliance home page ... again similar but different ... but can be used and deployed akin to daemons ... but using virtual machine isolation and partitioning infrastructure
http://virtualappliances.net/

virtual appliances blog
http://virtualappliances.org/

a relatively recent article: Excited about virtual appliances
http://www.networkworld.com/columnists/2006/082806gearhead.html

Hundreds of Free Virtual Appliances
http://www.digitalmediaminute.com/article/2245/hundreds-of-virtual-appliances

... for all intents and purposes ... virtual appliances and service virtual machines are nearly identical concepts.

To RISC or not to RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: To RISC or not to RISC
Newsgroups: alt.lang.asm,comp.arch,alt.folklore.computers
Date: Wed, 01 Nov 2006 22:01:19 -0700
re:
https://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
https://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC

reference to old email about "special message" (SPM) ... which was a superset of both vmcf and iucv ... and predates both having been done originally on cp67:
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history

misc. other posts mentioning vmcf
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005k.html#59 Book on computer architecture for beginners
https://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2006m.html#54 DCSS

and for the heck of it, from somewhere long ago and far away ... note this email is somewhat discussing the same subject as the email from 1985 included in the referenced post "other cp/cms history":

Date: 10/16/78 08:35:03
From: wheeler

VMCF communication requires a program and the knowledge on the senders part whether or not the receiver is a program and/or a person. The simplest example is the use of the VNET facility to do CMDs on other real machines (i.e. q system). I very often wish to know how does a particular node route information. The information back from VNET will in general by asychronous (unless you wish to wait possibly a very long time for the response). There have been several people who have written programs to do the same thing automatically in order to create an up to date structure of the network. The only way to do that with the product change is to have VNET determine (possibly by how fast the original characters were typed) whether the originator is a person or a program.

I re-iterate; untill IBM supplies it in a architecturly clean implementation with all the facilities available, there will be a demand for an interface which is allowed to capture all information in a program which would be destined for a person (also Yorktown still seems to think if they don't have the application then it either it must not exist or 'who in the world would ever want to do that').


... snip ... top of post, old email index, HSDT email

To RISC or not to RISC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: To RISC or not to RISC
Newsgroups: alt.lang.asm,comp.arch,alt.folklore.computers
Date: Wed, 01 Nov 2006 22:42:23 -0700
re:
https://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC

note that the old 1978 email reference to automated programs (which issue a lot of automated queries and, on the basis of the responses, issue new queries) to create an up-to-date structure of the network describes a process quite analogous to that used by modern day web crawlers.
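the query-then-follow-responses pattern can be sketched as a simple breadth-first walk; this is a minimal illustration, where `query_neighbors` is a hypothetical stand-in for sending a command to a remote node and parsing its reply:

```python
from collections import deque

def map_network(start_node, query_neighbors):
    """Build an up-to-date picture of a network's structure by
    querying a node for its adjacent links and, on the basis of
    each response, issuing new queries -- the same pattern the
    1978 email describes, and the pattern web crawlers use.
    query_neighbors(node) -> list of adjacent node names."""
    topology = {}                  # node name -> list of adjacent nodes
    pending = deque([start_node])  # nodes discovered but not yet queried
    while pending:
        node = pending.popleft()
        if node in topology:
            continue               # already queried this node
        topology[node] = query_neighbors(node)
        pending.extend(topology[node])
    return topology
```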

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Wed, 01 Nov 2006 23:11:27 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
RSCS was the external product name, VNET was the internal product; there may have been some overlap at times, just as there was between Lynn's CP and various commercial releases of VM products. ;^> Had IBM shenanigans not prevented the release of the internal (dare I say "research" a la DMR et al) product, who knows if MVS, Unix, Windows would be around? IMHO IBM's biggest mistakes have not been in their execution of their business, but in not allowing their research fellows to productize the hell out of their labours a la 3M.

actually the external product name went thru a number of iterations ... back and forth between RSCS and VNET ... as referenced in this old email from 1985 included in this post
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history

aka "the internal version of RSCS" being eventually released as the VNET PRPQ.

at some point VNET was used to describe both the internal networking software and the internal network itself
https://www.garlic.com/~lynn/subnetwork.html#internalnet

as in this post
https://www.garlic.com/~lynn/2006t.html#43 The Future of CPUs: What's After Multi-Core?

that includes part of an email from 1981 that references a request from CUNY to interface BITNET to VNET (where bitnet is the university network running RSCS software and VNET is both the internal software and the internal network)

in the 1985 email referencing the release of the VNET PRPQ in the mid-70s, there were enormous objections to allowing the VNET PRPQ to be announced and shipped. you would probably never believe the tortuous process and logic that eventually led to the company allowing the VNET PRPQ to be shipped.

in any case, this subthread is something of a repeat of a very similar thread/exchange in 2002
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable

including references to bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

the referenced posting from 2002 contains a section from Melinda's vm history about some of the early rscs/vnet (software) history
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

The Future of CPUs: What's After Multi-Core?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 02 Nov 2006 00:04:05 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
in any case, this subthread is something of a repeat of a very similar thread in 2002
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable


re:
https://www.garlic.com/~lynn/2006t.html#49 The Future of CPUs: What's After Multi-Core?

and some other posts in the similar thread/exchange from 2002
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#21 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#22 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#23 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#26 DEC eNet: was Vnet : Unbelievable

the above post references a (2002) post by morris that includes:


A listing I have from 1985 -- the accuracy of which I cannot guarantee --
shows the following node counts:

BITNET    435
ARPAnet  1155
CSnet     104 (excluding ARPAnet overlap)
VNET     1650
EasyNet  4200
UUCP     6000
USENET   1150 (excluding UUCP nodes)

... snip ...

i believe the number of nodes on arpanet/internet passed the number of vnet nodes sometime later in 1985.

the majority of the following email went into lengthy detail about encryption and DES ... but concludes with the following sentence.

Date: 06/25/85 19:59:03
From: wheeler
re: hsdt des;
...
This encryption technique is attractive for the internal network with the number of nodes approaching 2000 and the desirability of end-to-end encryption.


... snip ... top of post, old email index, HSDT email

there was extensive use of link encryptors for lines that left corporate physical facilities (but not much use of end-to-end encryption). i have some memory of a claim in this period that the internal network had over half of all the link encryptors in the world.

the following is from (somebody's) old posting to info.nets from 18feb89


>   Does anyone have a current table of size estimates for the academic
>   and research networks?
>
>   Network   as of     count Description
>   --------  --------  ----- -----------------------------------------------
>   BITNET    01/18/85    435 University/nonprofit/research network
>   Arpanet   01/22/85   1155 DoD related


The December 1988 BITNET nodes file contains 2691 entries. This includes BITNET/NETNORTH/EARN nodes.
... snip ...

google net.mail posting from 11aug85 giving list of 694 BITNET nodes as of 12jul85
http://groups-beta.google.com/group/net.mail/browse_thread/thread/1d703f2904d9ace0/551c1d15a1ab71d4?lnk=st&q=&rnum=21&hl=en#551c1d15a1ab71d4




previous, next, index - home