List of Archived Posts

2005 Newsgroup Postings (06/01 - 06/11)

private key encryption - doubts
The 8008
Interesting P4 pipeline article ("replay")
Request for comments - anti-phishing approach
private key encryption - doubts
The 8008
private key encryption - doubts
TSO replacement?
TSO replacement?
Determinstic OpenSSL signature
Request for comments - anti-phishing approach
The 8008
Performance and Capacity Planning
Performance and Capacity Planning
The 8008
The 8008
Performance and Capacity Planning
Performance and Capacity Planning
Repository for digital certificates
Performance and Capacity Planning
Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY
Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY
Public disclosure of discovered vulnerabilities
Public disclosure of discovered vulnerabilities
Public disclosure of discovered vulnerabilities
IBM Plugs Big Iron to the College Crowd
IBM Plugs Big Iron to the College Crowd
Public disclosure of discovered vulnerabilities
NASA Discovers Space Spies From the 60's
IBM Plugs Big Iron to the College Crowd
IBM Plugs Big Iron to the College Crowd
Three-Address RISC, and Really Fast Addressing
IBM Plugs Big Iron to the College Crowd
IBM Plugs Big Iron to the College Crowd
IBM Plugs Big Iron to the College Crowd
IBM Plugs Big Iron to the College Crowd
A second look at memory access alignment
A second look at memory access alignment
virtual 360/67 support in cp67
A second look at memory access alignment
A second look at memory access alignment
TSO replacement?
Public disclosure of discovered vulnerabilities
A second look at memory access alignment
A second look at memory access alignment
Where should the type information be?
Public disclosure of discovered vulnerabilities
Where should the type information be?
Public disclosure of discovered vulnerabilities
Public disclosure of discovered vulnerabilities
virtual 360/67 support in cp67
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Banks
Banks
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Ancient history
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Ancient history
Ancient history
More on garbage collection
The 8008
More on garbage

private key encryption - doubts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: private key encryption - doubts
Newsgroups: comp.security.ssh
Date: Wed, 01 Jun 2005 09:59:12 -0600
"Yogee" writes:
Alice creates a one-way hash and encrypts it with her private key -> sends it to Bob. Bob decrypts it with the public key (available everywhere I guess). So Bob now knows the private key of Alice. So it's no longer private.

What's the trick in this? I think one can encode a message using the public key but only one having the private key can decrypt it.

I need to know what exactly a digital signature is. A one-way hash and digest.


asymmetric cryptography uses a pair of keys; what one encodes, the other decodes (as opposed to symmetric cryptography, which uses the same symmetric key for both encryption and decryption).

there is a business process that defines one of the asymmetric cryptography key pair as "public" and makes it generally available; the other of the key pair is designated "private" and is consistently kept confidential and never divulged.

in the digital signature business process ... a hash of a message is computed and encoded with the private key. the message and the digital signature (encoded hash) are transmitted. the recipient recomputes the hash of the message, decodes the digital signature with the (corresponding) public key, and then compares the two hashes. if the two hashes are the same, then the recipient

1) knows the message hasn't been modified in transit

2) implies something you have authentication about the originator, aka that the originator has access to and use of the corresponding private key

from the 3-factor authentication paradigm:
something you have
something you know
something you are


verification of a digital signature implies something you have authentication regarding the originator ... that they have access to and use of the corresponding private key.

in the purely digital signature verification case, the actual message isn't necessarily encrypted/encoded. when there is a requirement to also encrypt/encode the message ... frequently a randomly generated symmetric key is created and actually encrypts the message (symmetric key encryption tends to be much more efficient than asymmetric key encryption). In the situation for both encrypting and digitally signing the message ... the originator transmits the encrypted message (encrypted with the randomly generated symmetric key), the symmetric key encoded with the recipient's public key, and the digital signature (hash encoded with the originator's private key). The recipient can verify the digital signature with the originator's public key. The recipient then can decode the symmetric key (which has been encoded with the recipient's public key) with their private key. Having decoded the symmetric key with their private key, the recipient can use the decoded symmetric key to decrypt the actual message.
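
as a minimal sketch of that combined flow (the python "cryptography" package here is assumed purely for illustration ... it isn't from any of the references in this thread, and the key size and padding choices are arbitrary):

# sketch of the sign-and-encrypt flow described above, using the
# python "cryptography" package (assumed purely for illustration;
# key sizes and padding choices are arbitrary)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"example message"

# originator: hash of the message encoded with the sender's private key
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = sender.sign(message, pss, hashes.SHA256())

# originator: a randomly generated symmetric key actually encrypts the message
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(message)

# originator: the symmetric key is encoded with the recipient's public key
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient.public_key().encrypt(sym_key, oaep)

# recipient: decode the symmetric key with own private key, then decrypt
plaintext = Fernet(recipient.decrypt(wrapped_key, oaep)).decrypt(ciphertext)

# recipient: recompute the hash and compare, via the originator's public key
# (raises InvalidSignature if the message or signature was altered in transit)
sender.public_key().verify(signature, plaintext, pss, hashes.SHA256())
assert plaintext == message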

fips186-2 has DSA and ECDSA
http://csrc.nist.gov/cryptval/dss.htm

which is the definition of public/private key pairs for the digital signature process only. in DSA/ECDSA, the hash and a randomly generated number are combined and encoded, resulting in two values (two consecutive digital signature encodings of the same message will result in different digital signature values) ... think of it somewhat as two equations in two unknowns. DSA/ECDSA have been vulnerable to weak random number generators, which can expose the private key (quality random number generation is essential to DSA/ECDSA digital signature operation).
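
that property is easy to see with a couple of lines (again the python "cryptography" package, assumed only for illustration, not anything from fips186-2 itself):

# because a fresh random number goes into every dsa/ecdsa signature,
# signing the same message twice yields two different (but both
# verifiable) signature values
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
msg = b"same message, signed twice"

sig1 = key.sign(msg, ec.ECDSA(hashes.SHA256()))
sig2 = key.sign(msg, ec.ECDSA(hashes.SHA256()))

assert sig1 != sig2                      # different encodings ...
for sig in (sig1, sig2):                 # ... but both verify
    key.public_key().verify(sig, msg, ec.ECDSA(hashes.SHA256()))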

in the past, most hardware tokens have had very inadequate random number generators. as a result you've seen such hardware tokens deployed with RSA-based digital signatures for authentication. The RSA key pairs would be generated externally (using a quality random number generator) and injected into the hardware token. With the advent of hardware tokens with quality random number generation ... you can use the random number generator for both on-token key pair generation as well as DSA/ECDSA digital signature operations.

shared-secrets and pin/password based authentication systems
https://www.garlic.com/~lynn/subintegrity.html#secrets

have the issue that the originator and the recipient both have access to the same value ... and therefore the recipient can also use the value to impersonate the originator. as a result, such infrastructures tend to require a unique pin/password/key for every unique security domain ... leading to a huge proliferation in the number of shared-secrets that an individual has to remember.

In the digital signature scenario, only the value that is used to verify the digital signature is ever divulged (which can't be used to impersonate the originator). The private key which is used to originate digital signatures is never divulged and is only available to the specific originator (the vulnerability is any compromise of the originator's private key).

there has been a lot published about PKI, digital certificate based public/private key business processes.

there is a lot less published on certificate-less public/private key business process
https://www.garlic.com/~lynn/subpubkey.html#certless

although typical ssh deployments of public/private keys are certificate-less.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Wed, 01 Jun 2005 11:44:06 -0600
forbin@dev.nul (Colonel Forbin) writes:
Sadly, my main well developed professional skill is making computers and the humans who produce and support them run fast and that isn't in great demand for some reason. I can't recall ever working at a place where the management had any sense of something remotely resembling vision, including HP. OTOH, I'm not a bad manager, I just don't enjoy it. If Seymour Cray had me at his side (well, probably more likely kicking his ass) at CCC, he wouldn't have bet the farm on the Cray-4 in a bad market, would have outbid HP on the Convex acquisition, and might not have died like the SUV version of James Dean because we would likely have been sitting in the treehouse at Megatest smoking a doobie and reflecting on our success.

a couple months after i graduated and joined the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
(... this was in the era of rapid corporate expansion ... and they were constantly trying to promote current employees into management ... to handle the new crop of incoming) .... i asked to read the manager's manual (3-inch-thick, 3-ring binder with lots & lots of stuff in it). i then explained that i had worked various kinds of construction jobs as a kid ... and my first summer in college was foreman on a project with a crew of 27-30 people. i mentioned that my "training" and background for managing/motivating people seemed to be inconsistent with what was laid out in the manager's manual.

that sort of put an end to promoting me to manager ... i never was one ... although i was given a lot of opportunity to provide technical direction.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Interesting P4 pipeline article ("replay")

From: Anne & Lynn Wheeler  <spamtrap@crayne.org>
Subject: Re: Interesting P4 pipeline article ("replay")
Newsgroups: comp.lang.asm.x86,comp.arch
Date: Wed, 1 Jun 2005 17:25:43 +0000 (UTC)
Terje Mathisen writes:
This is actually quite well-known in the literature and in comp.arch, it is called data speculation: The instruction scheduler tries to guess when a given data item will become available, and schedules the next instruction to start at that very moment.

in the high-end 360/370 horizontal microcoded engines ... it was a little harder; the programmer had to count the cycles and not make reference to values in registers until the appropriate number of cycles had passed (after a fetch or other asynchronous operation).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Request for comments - anti-phishing approach

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Request for comments - anti-phishing approach
Newsgroups: comp.security.misc
Date: Wed, 01 Jun 2005 19:50:07 -0600
"douglas.ross@gmail.com" writes:
Here's a refined version of the anti-phishing approach, with screenshots. It revolves around making the phisher solve a captcha-like problem:

some recent postings in the cryptography mailing list on phishing vulnerabilities and countermeasures:
https://www.garlic.com/~lynn/aadsm19.htm#17
https://www.garlic.com/~lynn/aadsm19.htm#18
https://www.garlic.com/~lynn/aadsm19.htm#19
https://www.garlic.com/~lynn/aadsm19.htm#20
https://www.garlic.com/~lynn/aadsm19.htm#21
https://www.garlic.com/~lynn/aadsm19.htm#22
https://www.garlic.com/~lynn/aadsm19.htm#23
https://www.garlic.com/~lynn/aadsm19.htm#24
https://www.garlic.com/~lynn/aadsm19.htm#25
https://www.garlic.com/~lynn/aadsm19.htm#26
https://www.garlic.com/~lynn/aadsm19.htm#27

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

private key encryption - doubts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: private key encryption - doubts
Newsgroups: comp.security.ssh
Date: Wed, 01 Jun 2005 22:59:02 -0600
"Yogee" writes:
How is the symmetric key generated? What algorithms are used? Suppose I finally got the symmetric key decided by the originator. Do I have to know how the key was created to continue the encryption, or is it just a key that will be used for encryption? Is the algorithm for encrypting the same for both parties?

What is the basic difference between RSA/DSA? Actually I did ssh without a password. I was able to do it successfully but didn't understand how it happened. So I tried to dig into it.


... you are now past the overview stage and into the read-the-manual stage. i already provided the URL for nist fips186-2 ... which goes into detail on dsa, ecdsa, and rsa digital signatures (and lots of additional details).

you should now read the detailed documentation.

with quick use of a search engine ... here is openssh
http://www.openssh.com/

here is a reference for openSSL
http://www2.psy.uq.edu.au/~ftp/Crypto/

SSL2 from the netscape web site
http://wp.netscape.com/eng/security/SSL_2.html

we were asked to consult with this small client/server startup in silicon valley that wanted to do payment transactions on their "commerce" server. in the year we worked with them, they moved to mountain view and changed their name. trivia question ... who had the rights to their new name at the time? also, what corporation was providing a substantial amount of funding for the commerce server?

minor posting drift with regard to the above period
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

SSL3 from the netscape web site
http://wp.netscape.com/eng/ssl3/ssl-toc.html

for TLS internet standards (aka standards version of SSL) ... there is my rfc index

https://www.garlic.com/~lynn/rfcietff.htm

in the RFCs listed by section, click on Term (term->RFC#)

in the Acronym fastpath, select "TLS"
transport layer security (TLS )
see also encryption , security
3943 3788 3749 3734 3546 3436 3268 3207 2847 2830 2818 2817 2716 2712 2595 2487 2246


clicking on the rfc number brings up the RFC summary. clicking on the ".txt=nnn" field in a RFC summary fetches the actual RFC.

with a little more searching you should be able to find detailed descriptions of the various kinds of asymmetric cryptography and the basic fundamentals involved (for instance ... you might start looking for stuff about RSA at the RSA web site ... also certicom is possibly a place to start looking for details about elliptic curve (asymmetric) cryptography).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Thu, 02 Jun 2005 07:38:32 -0600
forbin@dev.nul (Colonel Forbin) writes:
Lynn, you are a national treasure.

long ago and far away in another life/galaxy ... datamation had an article claiming something about me being worth my weight in gold (not sure to whom ... it hasn't been to me) ... anyway, it sounded better than national treasure.

some were blaming me for something that was being called *tandem memos* (sort of an early mailing-list/newsgroup thing ... about 25 yrs ago).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

private key encryption - doubts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: private key encryption - doubts
Newsgroups: comp.security.ssh
Date: Thu, 02 Jun 2005 07:42:28 -0600
"Yogee" writes:
I wanted to know about some open source projects, small ones which I can study & if possible contribute something.

It would be of great help. Just wanted to know more. You guys seem to be very much into this.


some history on charging for software (it used to be that almost all ... or at least large amounts of software was open source):
https://www.garlic.com/~lynn/2005g.html#51
https://www.garlic.com/~lynn/2005g.html#53
https://www.garlic.com/~lynn/2005g.html#54
https://www.garlic.com/~lynn/2005g.html#57

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TSO replacement?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSO replacement?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 02 Jun 2005 12:42:25 -0600
bblack@ibm-main.lst (Bruce Black) writes:
I think Ed has overstated it. TSO is stabilized but supported, not dead. And ISPF is still being enhanced (although few enhancements in recent years).

guess who used to pay for ISPF development? it is an inside joke. when every business operation had to show a profit ... a "strategic" product that couldn't cover its own bills (customers would never pay what was actually necessary to cover the run rate of the operation doing the development) would be combined with some real profit-making product(s) ... so the overall business operation showed a profit. at one point ISPF was frequently held up as a prime example.

somewhat related to change-over to priced software that was permeating the industry
https://www.garlic.com/~lynn/2005g.html#51
https://www.garlic.com/~lynn/2005g.html#53
https://www.garlic.com/~lynn/2005g.html#54
https://www.garlic.com/~lynn/2005g.html#57

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TSO replacement?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSO replacement?
Newsgroups: bit.listserv.ibm-main
Date: Thu, 02 Jun 2005 13:50:58 -0600
howard@ibm-main.lst (Howard Brazee) writes:
Programmers are used to persuading our companies that they need to pay for projects that individual customers are not willing to pay for.

If people in charge of making purchases of systems believe TSO is dead, then they will be more likely to go with systems that they perceive having a future. Which means IBM doesn't sell as many systems, and our skills have less value.


this was picking up steam in the mid to late 70s ... stuff deemed strategic, like ISPF, previously could just be done because somebody decided it should be done. in the mid to late 70s ... some number of products got put on a sound business footing with profit, cost management, etc. this was an extremely difficult transition for some number of product operations that hadn't been used to operating under such constraints. thus was born the co-mingling gimmick for some number of products.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Determinstic OpenSSL signature

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Determinstic OpenSSL signature
Newsgroups: sci.crypt
Date: Thu, 02 Jun 2005 15:27:44 -0600
bmoeller@acm.org (Bodo Moeller) writes:
Other examples: Apart from SSL, you must expect to find this kind of padding in X.509 certificates using RSA signatures (it's not the only scheme in use, but very common). Also RSA signing in PGP uses it.

one of the somewhat legacies of the 90s has been the use of RSA in infrastructures that might consider deploying hardware tokens (as an integrity mechanism). the tokens from the era typically had very poor random number capability ... which is required for key generation ... but is also required by dsa and ecdsa for every signature generated.

In the RSA scenario, the infrastructures could implement token key injection with the key generation happening on an external device with reasonable random number capability.

there are some peculiarities of dsa/ecdsa signatures (because of the random number) vis-a-vis RSA ... if somebody signed the same exact data multiple times ... all the signatures would be different (and non-deterministic)
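
the contrast can be sketched directly (the python "cryptography" package below is assumed just for illustration ... it is not the openssl code under discussion; pkcs#1 v1.5 is the deterministic padding scheme mentioned in the quote):

# rsa pkcs#1 v1.5 signing uses no per-signature random number, so
# repeated signatures over the same data are byte-for-byte identical
# (unlike dsa/ecdsa, where each signature consumes fresh randomness)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"the same exact data"
sig1 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
sig2 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
assert sig1 == sig2   # deterministic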

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Request for comments - anti-phishing approach

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Request for comments - anti-phishing approach
Newsgroups: comp.security.misc
Date: Thu, 02 Jun 2005 19:27:05 -0600
Barry Margolin writes:
It says New Hampshire, but I'm in Massachusetts. Better than New Jersey, but still pretty far off.

I guess they have a list of the regions that Comcast has assigned portions of their address space to, so they know this block is for New England. Durham is probably where some regional headquarters are.


and this is w/o even trying hard ... between the zombie armies that could possibly be used as a flavor of proxy/redirection ... and rerouting traffic thru a remote dialup POP ... the geographic discrepancies could get really significant

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.computers
Date: Fri, 03 Jun 2005 07:58:54 -0600
Alexander Schreiber writes:
During my service time, I've had people explain to me that they wanted to be tankers because of the supposedly pretty clear-cut outcomes of battles: you either survive and get out pretty much unharmed, or if the tank gets killed you die pretty quickly. No being dragged from the battlefield with body parts missing, a handful of shrapnel in your flesh, almost but not quite dying, to survive as a ruined wreck of a man crippled in war.

one of boyd's analyses
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

was the sherman/tiger comparison during ww2. he claimed that the tiger had something like a 5:1 kill ratio over the sherman ... but there was a conscious decision to stay with the shermans because they could manufacture, crew, and deploy ten times as many ... more than making up for any 5:1 losses.

i had an uncle who lied about his age and went into the army early ... he was deployed with a tank maint. crew for effectively the whole period of US involvement in that conflict in europe.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance and Capacity Planning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance and Capacity Planning
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Jun 2005 08:05:44 -0600
p_hanrahan@ibm-main.lst (Paul Hanrahan) writes:
Anyone want to discuss LSPR, MIPS, RMF and SMF? I found a free mips msu chart at:

http://www.isham-research.co.uk/mips_z990.html


lots of past postings on performance tuning and management also evolving into stuff they started calling capacity planning
https://www.garlic.com/~lynn/submain.html#bench

the HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

eventually grew into worldwide field, sales, and marketing support. one of the applications was an analytical model called the performance predictor. a salesman could take customer configuration, workload, and performance data, feed it into the performance predictor, and ask what-if questions about the effect of changing hardware and/or workload.

of course, there were also the configurators ... which became a prerequisite for a salesman to even create a customer hardware order.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance and Capacity Planning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance and Capacity Planning
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Jun 2005 12:31:09 -0600
Eric-PHmining@ibm-main.lst (Eric Bielefeld) writes:
Two things.

1. What is a NUMA machine? How does being NUMA affect performance or MIPS values?

2. I still think that MIPS is a good measure of performance, if used within the same line of computers. If you compare the z/890 or z/990 models, MIPS gives a good relative value of what to expect. If you compare them to a 9672, it probably isn't as relevant. The fact that IBM, when announcing new hardware, usually prints MIPS in their charts says that it is still a valid number. I'm not saying it is best, or that there aren't problems with MIPS, but the MIPS numbers still make more sense to me than any other indicators.

Eric Bielefeld P&H Mining Equipment


NUMA is non-uniform memory access architecture.

basically, take a small CEC (say a one- to four-processor board) with its own private memory, then create a second-order memory bus that interconnects the memories of multiple such CECs in a uniform memory addressing architecture.

the latency for a CEC to get at its local memory is much shorter than the latency to get at memory located at a different CEC. the addressing is uniform ... but the latency is different.

this is somewhat analogous to expanded store on the 3090 ... where it was the same memory technology but further away and higher latency ... however, on the 3090, it wasn't uniform addressing ... local memory and expanded store used different memory addressing and explicit software management.

from an operational standpoint ... it somewhat harks back to LCS from the 360 days. You could get 8mbytes of 8-microsecond-latency memory from Ampex (and other vendors) that attached to various 360 models (they were found on some number of 360/65s and 360/75s, where the normal memory latency was .75mics ... 750ns).

in the late-80s and early-90s time-frame ... about the same time LLNL was pushing serial fiber channel into becoming a standard, people at SLAC were pushing SCI into being a standard. FCS & SCI used similar fiber hardware technology .... however, SCI developed architectures for asynchronous memory bus operation over fiber links (rather than simply asynchronous i/o operation over fiber links). The SCI asynchronous memory bus architecture supported 64 ports.

Convex used SCI and HP two-processor snake boards to build a 128-way system. Sequent and DG used SCI and pentium 4-processor boards to build 256-way systems.

Convex evolved a version of MACH (from CMU) to support the 128-way Exemplar ... and supported some of the same thruput trade-off decisions that should be familiar to people who did LCS support in 360 days (do you execute directly in LCS ... or copy to higher-speed storage for execution). Some of the NUMA thruput issues are analogous to cache considerations ... thruput is affected by latency.

Sequent enhanced the Dynix UNIX system that they had developed to support up to 32-way "uniform" SMP operation.

Since that time, HP has absorbed Convex, DG has gone away (there is some legacy of their disk raid products around), and IBM has bought Sequent.

We were somewhat involved in both of these fiber efforts (fiber channel standard and SCI).

random past fcs, sci, &/or numa posts:
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#17 X.25 lost out to the Internet - Why?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001l.html#16 Disappointed
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#42 Looking for Software/Documentation for an Opus 32032 Card
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002j.html#78 Future interconnects
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002p.html#8 Sci Fi again
https://www.garlic.com/~lynn/2002p.html#30 Sci Fi again
https://www.garlic.com/~lynn/2002q.html#6 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2002q.html#8 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2003b.html#2 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#39 Flex Question
https://www.garlic.com/~lynn/2003j.html#65 Cost of Message Passing ?
https://www.garlic.com/~lynn/2003p.html#16 Star Trek, time travel, and such
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
https://www.garlic.com/~lynn/2004c.html#37 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#48 [OT?] FBI Virtual Case File is even possible?
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.computers
Date: Fri, 03 Jun 2005 12:36:32 -0600
rpl writes:
Shermans were *much* more maneuverable (and numerous) than Tigers; I don't think that figure is based on Sherman/Tiger confrontations which would be the ideal metric, but possibly "total armour kills", calculated by number of enemy armour kills per Tiger vs number of enemy armour kills per Sherman.

one comment was that a tiger could take out a sherman ... hitting it just about anyplace ... but a sherman couldn't take out a tiger front-on ... a sherman was pretty much limited to shooting a tiger from behind to be effective (relative size of the guns and the placement and amount of armor).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.computers
Date: Fri, 03 Jun 2005 13:38:54 -0600
boyd did get something of his way on the tank-use subject ... winning out with the battle plan for desert storm (I). the rumor is of a quote that one problem going into desert storm (II) was that boyd had died during the interval.

my boyd related posts
https://www.garlic.com/~lynn/subboyd.html#boyd

one of my first boyd postings ... although i had met him many years earlier
https://www.garlic.com/~lynn/94.html#8

it mentions a US News & World Report article about John during desert storm, "The Fight To Change How America Fights" (6May1991), with minor reference to the new crop of maj. & cols. (that may have heard him in some war college class or another) as his jedi knights.

other boyd related posts from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

some specifics from above: How Col. John Boyd Beat the Generals
http://www.insightmag.com/media/paper441/news/2002/09/02/National/How-Col.John.Boyd.Beat.The.Generals-260509.shtml
Don't take John Boyd's name in vain
http://www.enterstageright.com/archive/articles/0503/0503boyd.htm
Col. John R. Boyd, USAF (ret.) died in West Palm Beach Florida on Sunday, 9 March 1997.
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance and Capacity Planning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance and Capacity Planning
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Jun 2005 14:24:39 -0600
Eric-PHmining@ibm-main.lst (Eric Bielefeld) writes:
MIPS have been around a lot longer than MSUs. They are more familiar to me, which is why I like them. I still remember my 158 with an attached processor. The main CPU alone gave 1 MIP, and with the AP, it was 1.8 MIPS. Back in 1980, that was a good sized machine.

ok, the 158 was nominally a one-MIP machine, based on various kinds of avg. workload mixes and avg. measured cache hit/miss ratios.

ibm two-way processors had an extremely strict memory coherency architecture .... most of the numa architectures had slightly more relaxed memory coherency ... another posting from this thread:
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning

in any case, when IBM did two-processor SMP (on 370 cache machines) ... they slowed each processor down by 10% to allow for cross-cache chatter latency (used to maintain strong memory coherency). As a result, a 370 two-processor system was rated at a basic 1.8 times a single-processor system (accounting for the minimum, basic delays in handling cross-cache chatter). There could be additional hardware degradation from the two caches actually spitting cross-cache invalidates back & forth at each other. On top of that, most SMP kernels had additional cross-cpu chatter protocols which further limited workload thruput. Actual workload thruput on a two-processor 370 SMP could be expected to be 1.5 times (or less) that of a single processor.
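
the arithmetic is simple enough to sketch (note the 15% kernel cross-cpu overhead below is an assumed illustrative figure, chosen only to land near the "1.5 times or less" observation, not a measured number):

# back-of-envelope for the two-processor ratings described above
up_mips = 1.0                       # nominal single-processor rate (e.g. a 158)
cache_chatter = 0.10                # each cpu slowed 10% for cross-cache chatter
hw_rating = 2 * up_mips * (1 - cache_chatter)        # = 1.8
kernel_chatter = 0.15               # assumed SMP-kernel cross-cpu overhead
workload_rate = hw_rating * (1 - kernel_chatter)     # ~= 1.53, i.e. ~1.5x
print(hw_rating, round(workload_rate, 2))            # 1.8 1.53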

so i did a lot of work on SMP kernel support, and one of my early test machines was a production 158AP at the consolidated US HONE complex in cal (when the 158AP was first announced and shipped)
https://www.garlic.com/~lynn/subtopic.html#hone

... HONE was by then the worldwide field, sales, and marketing support vehicle.

I did some sleight of hand in the SMP support ... and (from a hardware monitor) was getting about 1.5MIPS out of one processor and 0.9MIPS out of the other processor (a 2.4 aggregate MIPS). Some of the sleight of hand was to schedule various parts of the workload so they ran for longer consecutive periods ... which resulted in a higher cache hit rate ... and therefore a higher MIP rate. That was coupled with hiding and/or making the kernel cross-processor chatter almost non-existent ... so the effective workload thruput characteristics on an SMP were very close to UP kernel workload thruput (drastically minimizing the kernel overhead for operating a multiprocessor configuration). lots of SMP postings
https://www.garlic.com/~lynn/subtopic.html#smp

Note this was different from a TSS/360 claim from the early 70s. TSS/360 on the 360/67 was supposedly the strategic product ... and cp/67 (virtual machines, precursor to lpars, etc.) from the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

was an uninvited interloper ... at one point, when there were 12 people working on cp/cms, there were supposedly 1200 people working on tss/360.

In any case, on a uniprocessor 360/67, tss/360 was getting worse performance supporting 4 interactive users doing approx. the same mixed workload (edit, compile, exec) than cp/cms was getting supporting 35 users. Part of the problem was that on a 1mbyte 360/67 ... the tss/360 fixed kernel requirements left very little for application paging.

In any case, tss/360 did a benchmark on a 2mbyte, 2-processor 360/67 that showed 3.8 times the thruput of tss/360 on a 1mbyte, 1-processor 360/67. The result was a big claim that tss/360 had fantastic multiprocessor support ... that it could make two processors run four times faster. The actual issue was that in a 2mbyte configuration, tss/360 almost had enuf room (left over after fixed kernel requirements) for executing application programs.

now along comes the 3081 ... which was supposed to never have a uniprocessor version ... only the two-processor 3081 (and a pair of 3081s for a four-processor 3084). the 3081 had the typical slowed-down processors running at 90 percent to allow for the cross-cache chatter. However, TPF (airline control program, acp, etc) didn't have multiprocessor support at the time ... and many TPF systems were already running on the largest uniprocessors available (although in cluster mode ... something like the big, massive HONE complexes ... which were some of the biggest single-system-images at the time). TPF customers couldn't really utilize 3081s ... they either ran with the 2nd cpu idle ... or they ran under VM/370 ... with effectively two copies of TPF in virtual machines ... one for each real processor. Although it wasn't planned, IBM finally came out with a single-processor 3083 (primarily for the TPF crowd) ... with the 2nd processor removed and the slow-down for cross-cache chatter removed ... so the processor ran at full speed rather than 90 percent.

there was some additional work done for the 3081 on kernel structures. prior to the 3081 ... most kernel storage allocation was done w/o regard to cache line boundaries. there was some analysis showing that if two different storage areas were allocated overlapping in the same cache line ... and two different processors were accessing the two different areas concurrently ... processor performance radically degraded. There was a big effort to re-organize kernel storage allocation so that it was cache-aligned and in multiples of cache lines (minimizing the cross-cache thrashing that was going on). This change is claimed to have improved overall customer thruput by five percent.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance and Capacity Planning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance and Capacity Planning
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Jun 2005 15:39:04 -0600
ibm-main@ibm-main.lst (ibm-main) writes:
These issues are not new - all the "joined" complexes (APs and MPs) showed the same problem. Attempts were even made to ameliorate this (in the field) via CPU affinity. Running on the "wrong" (i.e. attached) processor on an AP involved significant overhead. The effect can be best seen by comparing the MIP rating of a (partitionable) MP with its component "halves".

Learn from history folks.


however, the 370 two-processor affinity was not because of non-uniform memory access ... affinity was oriented towards

1) cache hit consistency ... constantly switching from one processor to another could play havoc with cache hit ratios (the memory latency introduced by cache misses is similar to, but different from, the variation in memory latency because of NUMA).

2) on a two-processor AP ... only one processor had connected channels; the other processor had no channels and no i/o capability and had to pass off i/o requests to the i/o-capable processor. i/o-intensive applications could benefit from affinity to the processor with attached channels. In some cases, there could be a similar situation in an MP configuration, where there weren't exactly symmetrical channel configurations on the two processors and one processor had unique I/O capability not found on the other.

recent posts
https://www.garlic.com/~lynn/2005j.html#12 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning

now there is some more to the 3081 tale started in an earlier post. as a temporary patch for TPF not having SMP support ... they reworked the VM multiprocessor support ... putting in some stuff tailored specifically for TPF under VM (which degraded the thruput for almost everybody else).

Normally, when TPF did an I/O operation, it would enter the VM kernel and do all the I/O emulation, ccwtrans, etc ... before returning to executing the TPF virtual machine. If the only workload was a single TPF virtual machine ... this left the other 3081 processor idle with little or nothing to do. So they reworked the VM kernel SMP support so that very shortly after entering the kernel ... it would do a SIGP (signal processor) to the other processor ... and then try to pass off the I/O emulation workload to kernel code running on the other processor ... while returning immediately to the TPF virtual machine (all falling within the architecture constraints defined for asynchronous operation of the SIOF instruction). This way you got some two-processor concurrent operation ... one processor doing some VM kernel I/O emulation overlapped with the other processor running the TPF virtual machine.

The downside was that the majority of customers (those with a normal workload mix, who had already been keeping both 3081 processors busy anyway) started seeing a new ten percent (elapsed time) of both processors being spent on the SIGP signalling and the SIGP interrupt handling and processing (just about the sole purpose of which was to provide some processing overlap specifically for customers with only a single TPF virtual machine as the primary workload).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Repository for digital certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Repository for digital certificates
Newsgroups: comp.security.misc
Date: Fri, 03 Jun 2005 15:17:25 -0600
smveloso writes:
First of all, I apologize if this is not the right forum to post this question...

I need to create a repository to allow users (and programs) access to digital certificates. I thought about an LDAP server, a database with a web front-end, etc... but is there a *standard* way to do this?


supposedly x.500 dap and x.509 identity digital certificates went hand-in-hand. the first time i remember paying much attention was when the subject was brought up at an acm sigmod conference and somebody explained it as a bunch of networking engineers attempting to recreate 1960s database technology.

one of the things that started to dawn about these x.500/x.509 design points was that the horrendous amounts of identity-related information in x.500 & x.509 raised significant privacy concerns.

later ... "lightweight" dap (ldap) came along.
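
as a hedged illustration of what an ldap certificate repository lookup looks like in practice ... directories descended from x.500 publish certificates in the standard userCertificate;binary attribute; the server name, base dn, and the python ldap3 package below are all assumptions for the sketch, not anything from the original question:

# sketch: fetching a certificate from an LDAP directory via the
# standard userCertificate;binary attribute (the ldap3 package,
# server name, and base dn are illustrative assumptions)
from ldap3 import Server, Connection, ALL

server = Server('ldap://directory.example.com', get_info=ALL)
conn = Connection(server, auto_bind=True)        # anonymous bind

conn.search('ou=people,dc=example,dc=com', '(cn=alice)',
            attributes=['userCertificate;binary'])
for entry in conn.entries:
    cert_der = entry['userCertificate;binary'].value   # DER-encoded cert
    print(entry.entry_dn, len(cert_der), 'bytes')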

there is this funny cross-over between trusted repositories of certificates and trusted repositories of certificate-less public keys ... making the original design point of x.509 certificates redundant and superfluous.

from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

in the RFCs listed by section, click on Term (term->RFC#)

and select "LDAP" from the Acronym fastpath
lightweight directory access protocol (LDAP ) (LDAPv2) (LDAPv3 )
see also ITU directory service protocol , directory
3928 3909 3876 3866 3829 3771 3727 3712 3703 3698 3687 3674 3673 3672 3671 3663 3494 3384 3383 3377 3352 3296 3112 3088 3062 3060 3045 2927 2926 2891 2849 2830 2829 2820 2798 2739 2714 2713 2696 2657 2649 2596 2589 2587 2559 2307 2256 2255 2254 2253 2252 2251 2247 2164 1960 1959 1823 1798 1778 1777 1558 1487 1249


selecting on any of the RFC numbers brings up the RFC summary. selecting on the ".txt=nnnn" field retrieves the actual RFC.

another kind of certificate-less approach
https://www.garlic.com/~lynn/subpubkey.html#certless

for public key authentication ... is to register public keys in a RADIUS infrastructure ... in lieu of shared-secrets, passwords, etc. ... and perform digital signature verification with the on-file public key. RADIUS supports lots of authentication clients accessing a RADIUS trusted repository for both authentication as well as authorization information (w/o necessarily exposing sensitive information to a wide-open population).
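
a minimal sketch of that certificate-less pattern (python "cryptography" package; the registry/challenge names here are hypothetical illustrations, not any RADIUS product API):

# certificate-less public key authentication: the trusted repository
# holds an on-file public key per user (no digital certificate) and
# verifies a digital signature over a server-issued challenge
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

onfile = {}                       # user -> registered public key

def register(user, public_key):
    onfile[user] = public_key    # in lieu of a shared-secret/password

def authenticate(user, challenge, signature):
    try:
        onfile[user].verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except (KeyError, InvalidSignature):
        return False

# client side keeps the private key (never divulged) and signs a challenge
priv = ec.generate_private_key(ec.SECP256R1())
register("lynn", priv.public_key())
challenge = os.urandom(16)
assert authenticate("lynn", challenge,
                    priv.sign(challenge, ec.ECDSA(hashes.SHA256())))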

again from my RFC index ... in the Term (term->RFC#) page, select "RADIUS" from the Acronym fastpath.
remote authentication dial in user service (RADIUS )
see also authentication , network access server , network services
4014 3580 3579 3576 3575 3162 2882 2869 2868 2867 2866 2865 2809 2621 2620 2619 2618 2548 2139 2138 2059 2058


note that RADIUS was originally developed by Livingston for their line of dial-up modem pool products. It has since become an IETF standard and expanded to meet much more generalized authentication and authorization requirements.

and (also from the Term (term->RFC#) page) standards work on generalized authentication, authorization and accounting
Authentication, Authorization and Accounting
see also accounting , authentication , authorization
3588 3539 3127 2989 2977 2906 2905 2904 2903

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance and Capacity Planning

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance and Capacity Planning
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Jun 2005 18:17:57 -0600
edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
From
http://bama.ua.edu/cgi-bin/wa?A2=ind0506&L=ibm-main&P=R11281

"Customers generally prefer software that can help them ensure the terms and conditions of their license agreements are being met. Such software helps customers avoid liability and breach of contract issues. For capacity-based licenses, hardware reporting of serial numbers, LPAR names, and capacity (in MSU) makes that possible!"


then for a little more drift ... some recent posts on (some) history of software licensing/pricing
https://www.garlic.com/~lynn/2005g.html#51
https://www.garlic.com/~lynn/2005g.html#53
https://www.garlic.com/~lynn/2005g.html#54
https://www.garlic.com/~lynn/2005g.html#55
https://www.garlic.com/~lynn/2005g.html#57

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinky lights  WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules. (fwd)
Newsgroups: alt.folklore.computers
Date: Sat, 04 Jun 2005 10:48:20 -0600
Charles Shannon Hendrix writes:
The same thing applies to things like LDAP: *lightweight* directory access protocol.

LDAP is a heavy protocol despite the name.

However, if you look at the X.500 stuff where it came from, then it does appear to be light after all.


recent post in a different n.g. on the subject
https://www.garlic.com/~lynn/2005j.html#18 Repository for digital certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinky lights  WAS: The SR-71 Blackbird was designed ENTIRELY  with slide rules. (fwd)
Newsgroups: alt.folklore.computers
Date: Sat, 04 Jun 2005 11:05:05 -0600
Anne & Lynn Wheeler writes:
recent post in different n.g. on the subject
https://www.garlic.com/~lynn/2004j.html#18 Repository for digital certificates


finger slip
https://www.garlic.com/~lynn/2005j.html#18 Repository for digital certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Sat, 04 Jun 2005 11:00:37 -0600
"John E. Hadstate" writes:
I disagree, David. It is pretty much that simple. It's just common sense. If C were a "terribly flawed" language, who would use it? In one way or another, it would be a too expensive luxury, and people would have long ago moved on to less flawed languages.

For some purposes, there are better languages, better in the sense that they restrict the range of implementation options and thus reduce the number of instances of results that are implementation-dependent. The leading candidate in this category is Java. But people who program in Java do so because they like the portability (lack of variation among implementations) and they have no need to do the things that Java and the JVM prevent them from doing.


there are some things that require very little upfront effort and offer ease of use ... but might not be applicable for industrial-strength operations .... in another domain ... say, like using a vw beetle for heavy freight hauling. something can have enormous shortcomings ... and be terribly flawed for industrial operations ... and still become widespread, if the up-front effort to use it is very low

a little thread drift related to an industrial-strength data processing thread in c.d.t
https://www.garlic.com/~lynn/2005i.html#42
https://www.garlic.com/~lynn/2005i.html#43

it doesn't mean that it is impossible to use a beetle for heavy freight hauling ... in fact ... one could imagine all sorts of folklore growing up about the tricks, optimization and procedures for heavy freight hauling with beetles.

however, the ease of entry level use can be more than offset by extensive and time-consuming manual compensation required when applying the solution to industrial operations.

lots of past postings on buffer overflow vulnerabilities and exploits
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Sat, 04 Jun 2005 11:03:31 -0600
"Del Cecchi" writes:
For the same reason we put airbags in cars and blade guards on table saws and GFIs on outlets: so that a mistake doesn't have such disastrous consequences. If you don't understand that, there is no help for you or anything you touch.

the seatbelt (and similar) analogy has been repeatedly used in numerous of these threads ... large collection from the past
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Sat, 04 Jun 2005 11:33:46 -0600
... this actually might be a case of real programmers ...

past posting of real programmers don't eat quiche in this n.g.
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems

i got complaints because of the mountain boot issue ... when research was still in bldg. 28 and before bldg. 85 was built, i would walk to work along cottle ... where 85 currently is there were no sidewalks ... and during the rainy season the treads of my boots would pick up quite a bit of mud ... which during the course of the day would be deposited in the halls of bldg. 28.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 04 Jun 2005 13:42:51 -0600
bob.shannon@ibm-main.lst (Bob Shannon) writes:
> I have been told that most of the folks at the Cambridge Scientific
> Center who were asked, post-decree, to transfer to
> Yorktown Heights ended up at Route 128 minicomputer
> companies.

Some years ago Melinda Varian published a paper on the History of VM. It chronicles the activities at the Cambridge Scientific Center. Fascinating story.

https://www.leeandmelindavarian.com/Melinda/25paper.pdf

Bob Shannon


the cp67 group was split off from the science center, morphed into the vm370 group, and took over the boston programming center on the 3rd floor (absorbing most of the people). they outgrew the 3rd floor and moved out to the old sbc bldg. at burlington mall.

in one of the attacks (the position being that there was to be no more vm370 product ... and that, to meet the 370-xa schedule, mvs/xa needed virtual machine support), the decision was made to close down the burlington mall group (and the vm370 product) and move all the people to pok to work on the internal-only "vmtool" (which would never be released as a product and would only be used for internal mvs/xa development).

there were lots of protests from various corners ... and it was somewhat decided that endicott could have a few of the people and continue along a little with a vm370 product (the rest were still needed in pok to support mvs/xa development).

lots of past posts about science center
https://www.garlic.com/~lynn/subtopic.html#545tech

there is some joke that the person who made the decision to close burlington mall and kill vm370 was a major contributor to vms (in terms of the number of people).

somebody had leaked the decision about closing burlington and killing the vm370 product to the people at burlington mall a couple of months before it was announced. there was a big investigation into the source of that leak (sort of a mini deep throat).

there were also some jokes about the person who was given the task of actually closing the burlington location ... not too long earlier they had the job of closing down the programming language group (in the time bldg. in manhattan).

later the science center moved out of 545 tech sq. down a couple blocks to 101 main street. before that happened, some number of us had transferred to the west coast

later, my wife and I had started ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and began subcontracting some of the work to three former people from the science center who had formed a small software company. this grew in size and scope ... so that when the science center was actually shut down ... they were able to move in and take over the science center qtrs at 101 main st. at this point you are into the OSF time-frame ... so you find some number of the science center people showing up in the OSF organization (as well as the ha/cmp subcontractor hiring some). quite some number now appear to be working at state street.

other stuff from the science center was the invention of gml (& sgml) ... the precursor to current xml
https://www.garlic.com/~lynn/submain.html#sgml

and the internal network ... which was larger than the arpanet/internet from just about the start until sometime mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

and the technology basis used in bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

the science center also did the port of apl\360 to cms\apl, rewriting various bits and pieces to adapt it to the virtual memory environment. this became the original basis for HONE ... originally cp67/cms and a lot of cms\apl applications ... which evolved into the world-wide platform supporting all field, sales, and marketing (eventually mainframe order contracts couldn't even be generated w/o being run on HONE)
https://www.garlic.com/~lynn/subtopic.html#hone

also out of the science center came a lot of the performance tuning and management technology, morphing into capacity planning
https://www.garlic.com/~lynn/submain.html#bench

and of course compare&swap was invented by charlie at the science center (the compare&swap mnemonic was chosen because "CAS" are charlie's initials):
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 04 Jun 2005 14:18:05 -0600
j.grout@ibm-main.lst (John R. Grout) writes:
Until a consent decree made them keep their distance, IBM did lots of university work, including discounts, donations and collaborative research. I have been told that most of the folks at the Cambridge Scientific Center who were asked, post-decree, to transfer to Yorktown Heights ended up at Route 128 minicomputer companies.

ref:
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd

note in the early 80s, ibm had formed "ACIS" ... which was told to hire a couple people to give away $XXXm to universities.

out of that ... IBM (equally with DEC) funded project athena to the tune of $25m (matched by $25m from dec). out of that you got x-windows, kerberos, and some number of other things.

cmu got $50m ... out of that came mach, camelot, andrew filesystem, andrew widgets, etc. mach was used by convex for the exemplar numa machine ... recent mention:
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning

mach was also used for NeXT machine and is the basis for current apple operating system.

acis group also started doing a port of bsd to 370. along the way that was retargeted to pc/rt and named "AOS". this caused a little heartburn for the AIX group in austin ... since they had claimed that something like 120 people were needed to program the VRM in PL.8 as an abstract machine layer for speeding up and simplifying the porting of (AT&T) unix to the pc/rt (the acis group did the bsd port to the bare hardware faster and with fewer resources).

The acis group also redid the JFS filesystem so that it didn't use the special 6000 hardware. The austin AIX group claimed the special 6000 hardware was required to make JFS perform ... ACIS showed JFS w/o the special hardware outperforming JFS using the special hardware.

ACIS also did the work that took UCLA's locus and produced AIX/370 and AIX/PS2.

a few other past posts mentioning acis:
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/2001f.html#1 Anybody remember the wonderful PC/IX operating system?
https://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003h.html#53 Question about Unix "heritage"
https://www.garlic.com/~lynn/2004d.html#57 If you're going to bullshit, eschew moderation
https://www.garlic.com/~lynn/2004n.html#19 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004p.html#53 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#61 will there every be another commerically signficant new ISA?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Sat, 04 Jun 2005 14:40:40 -0600
"Del Cecchi" writes:
I was replying to the assertion that C "is not that bad", and it is a long way from "terribly flawed" when in fact if it was a power tool, the manufacturer would long ago have been bankrupted by the product liability lawsuits. In fact I might go so far as to liken C to Asbestos. It was really great and useful and widely used for many years. It wasn't until much later that the problems were realized.

there is also some similarity to the argument about the new (at the time) safety requirements for farm machinery .... everybody knew that the only people who stuck body parts into exposed machinery and got themselves killed were total dullards (besides, a little darwinism is supposedly good for the species). furthermore, the extra cost for all those extra safety doodads was going to put farmers out of business.

... and any subsequent statistical "evidence" about the resulting decrease in death rates wasn't actually proof. i managed to make it to majority after having started operating various kinds of farm & other heavy equipment around age eight (when i could reach the starter pedal on the floor ... some early electric starter motors ... before they were connected to the key switch).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

NASA Discovers Space Spies From the 60's

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: NASA Discovers Space Spies From the 60's
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Sat, 04 Jun 2005 15:32:44 -0600
NASA Discovers Space Spies From the 60's
http://science.slashdot.org/science/05/06/04/1721241.shtml?tid=160&tid=103&tid=1

the project was never explained to me that way ... but what do i know (and besides, i was only an undergraduate at the time)

360/67 was announced with up to four-way smp. for the most part only two-way smp processors were built as part of the standard product line.

the standard 360/65 two-way processors were basically two standard machines joined together with uniform addressing of all combined memory (but each processor was otherwise mostly a uniprocessor with private i/o channels ... multiprocessor i/o was simulated by having control units with multi-channel connectivity ... one to a channel connected to each processor).

the 360/67 multiprocessor was a different beast. it had independent paths to memory for the processors and the "channel controller". the "channel controller" was a specialized box that could either put the complex into a multiprocessor configuration or split it into individual single processors ... with switches setting which components were offline/online and/or which resources were partitioned to which configuration. a 360/67 multiprocessor not only could have single, uniform addressing for all memory ... but also single uniform addressing for all available channels in the configuration (all this fancy stuff was dropped in the 370 generation and some of it didn't reappear until 370/xa and 3081).

the standard 360/67 "control registers" allowed "sensing" of the channel controller switches (i.e. storing specific control registers gave you the switch settings on the channel controller).

Lockheed had part of the "MOL" contract ... and for them, a special 360/67 triplex was built. Among other special features, not only could the channel controller switch settings be "sensed" (by storing the appropriate control registers) ... but it was also possible for software reconfiguration by loading different values into the appropriate control registers.

in current architecture ... most of the control registers defined in 360/67 for channel controller operation ... have now been given over to defining multiple virtual address spaces ... in support of all the fancy cross virtual address space operation stuff.

a little mention of 360/67 control register use:
https://www.garlic.com/~lynn/2001.html#69 what is interrupt mask register?

scan of 360/67 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf

control registers are on page 16 ... 8-14 are the "Partitioning Sensing Registers" (although it says to reference 2167 manual for details)

misc. other past posting mentioning 360/67 "blue card"
https://www.garlic.com/~lynn/99.html#11 Old Computers
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2003l.html#25 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004l.html#65 computer industry scenairo before the invention of the PC?
https://www.garlic.com/~lynn/2005f.html#41 Moving assembler programs above the line

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 05 Jun 2005 09:45:25 -0600
p_hanrahan@ibm-main.lst (Paul Hanrahan) writes:
The academic community made huge contributions to VM. However, the VM developers did their share also.

Many of the IBM developers who were laid off in the early to mid 90's when IBM took a nose dive were excellent programmers.

IBM management was very wise in encouraging the user community to develop a sense of ownership in products like VM.


there was the joke that (at one time) a significant percentage of ibm products had originated at datacenters (both customer and internal) and were then handed over to "development" groups to maintain and support as products.

at various times during the late 70s and the 80s, there were studies of customer enhancements to VM (not in the product) compared to internal corporate enhancements to VM (also not in the product). the summary was that the KLOCs of internally distributed vm enhancements were about the same size as the KLOCs of customer distributed enhancements (on things like share tape).

over the years there were sometimes easier avenues for internally developed code to leak into the product (as opposed to customer developed enhancements leaking into the product). part of the context was that cp67 (and then vm) shipped as source and was source maintained. it wasn't until the OCO (object code only) issues, starting in the very late 70s, that this began to turn around.

the internal network was mostly VM-based machines and VM-based networking ... with the batch operating system machines (& their networking paradigm) mostly restricted to edge nodes. some mention of '83 and internal network hitting 1000th node:
https://www.garlic.com/~lynn/internet.htm#22

other internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

besides the huge proliferation of vm use inside the company ... there was also the custom tailored HONE system that provided worldwide support for field, sales, and marketing people (it quickly came to pass that mainframe orders couldn't even be submitted w/o having passed thru HONE). HONE was a large vm-based time-sharing service supporting sales and marketing world wide.
https://www.garlic.com/~lynn/subtopic.html#hone

for over a decade, i had a hobby providing custom tailored operating systems and other stuff for HONE. i also assisted in the proliferation around the world. when EMEA hdqtrs moved from the states to paris ... I handled much of the HONE cloning for the new hdqtrs in the new bldg. in La Defense. I got to work on the initial hone installation in Japan.

in the 80s, some new, incoming HONE executive ran across some reference to me ... and raised the issue of the worldwide HONE infrastructure having a dependency on a single person's hobby, somebody who wasn't part of the organization, with no MOUs between the different responsible organizational executives (between HONE and whoever my current chain of command might be at the time ... who possibly didn't even know what HONE was and/or that it existed).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 05 Jun 2005 09:21:32 -0600
ibm-main.04@ibm-main.lst (Leonard Woren) writes:
Back when I was working at the University of Southern California, which had institutionalized animosity towards mainframes and IBM, I once got a call from an MVS systems programmer working in the real world who was interested in one of my freebie programs. He had graduated from USC with a degree in Computer Science and until he came across my program, he didn't even know that USC had an IBM mainframe! In USC Engineering, it was taboo to even mention it. From stories that I heard, I believe this anti-IBM, anti-mainframe attitude was largely due to the high costs of IBM systems and lack of meaningful educational discounts. As already mentioned here, IBM was giving 15% HESC discounts (but only for VM software, none for MVS software!), while Sun was giving 85% discounts. And I don't think that IBM discounted the hardware at all.

So Aker's fixation on current quarter results came home to roost. (Before that, IBM, like Japanese companies, understood that sometimes you had to forgo a little profit now to generate more later.)


back in the 60s ... there were regular 60 percent educational discounts ... this was before the fed. gov. action and the unbundling announcement of 6/23/69 .... which also started the process of eliminating free software and moving to charge/license for software.

recent past posts about a view of transition from free software to pricing/charging for software.
https://www.garlic.com/~lynn/2005g.html#51
https://www.garlic.com/~lynn/2005g.html#53
https://www.garlic.com/~lynn/2005g.html#54
https://www.garlic.com/~lynn/2005g.html#55
https://www.garlic.com/~lynn/2005g.html#57

the big acis money give-away to universities in the 80s ... never really involved mainframes ... mostly lots of money (presumably tax deductible, though possibly various fed. red tape may have precluded various kinds of restrictions on what the money could be spent for ... pure speculation ... I have no facts or information).

previous acis post in this thread
https://www.garlic.com/~lynn/2005j.html#26

hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

was restricted from directly bidding on the nsfnet1 rfp (i.e. the high speed backbone ... and also the later nsfnet2 rfp upgrade) ... although we did get an nsf technical audit of what we had running, which made some claim that it was at least five years ahead of all RFP responses to build something new
https://www.garlic.com/~lynn/internet.htm#0

... there was other corporate involvement in pouring lots of resources into the nsfnet backbone (way over and above what nsf was actually paying out on the rfp) ... again not particularly mainframe related. a few posts related to the nsfnet rfp (aka the backbone that tied together the regional networks, setting the stage for an "operational" internet ... in contrast to internet "technology"):
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2000e.html#10

lots of other internet related posts
https://www.garlic.com/~lynn/subnetwork.html#internet

in the 80s ... ibm also pumped a lot of resources into bitnet (in the us) and earn (in europe)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

old post specifically referring to the formation of earn
https://www.garlic.com/~lynn/2001h.html#65

bitnet/earn was oriented to using the technology used in the internal network ... the internal network was larger than the arpanet/internet from just about the beginning until sometime mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Three-Address RISC, and Really Fast Addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Three-Address RISC, and Really Fast Addressing
Newsgroups: comp.arch
Date: Sun, 05 Jun 2005 08:51:17 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
It seemed to me that load-store RISC architectures were hugely inefficient, requiring three instructions to do what a CISC machine can do in one.

But RISC instructions do naturally map to what processors can dispatch to pipelines.


i've characterized 801/RISC as in some sense a reaction to the failure of FS
https://www.garlic.com/~lynn/submain.html#futuresys

and its aggressively complex hardware design.

there were repeated statements about 801/risc in the mid to late 70s ... about trading off software complexity for hardware simplicity (aka moving stuff from hardware to software) ... effectively a one-to-one relationship between an instruction and a simple hardware component ... which aided the compiler in laying out the optimization (this was also in the days of chips with relatively minimal circuit counts ... so single chip cores couldn't be doing a lot anyway).

besides software complexity being moved into pl.8 compiler ... the chip also didn't have any security domain protection features; the compiler would only generate secure code ... and the cp.r operating system would only load and execute valid code.

later for romp chip ... some things had to be retrofitted to the chip when it was decided to retarget it to the workstation market with a relatively vanilla unix operating system with conventional security domain requirements (separation of kernel and application).

misc. past 801, risc, romp, rios posts
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 05 Jun 2005 10:59:44 -0600
ibm-main.04@ibm-main.lst (Leonard Woren) writes:
So Aker's fixation on current quarter results came home to roost. (Before that, IBM, like Japanese companies, understood that sometimes you had to forgo a little profit now to generate more later.)

that really isn't fair to akers. during the 80s, ibm was pouring huge amounts of money into universities and places like nsf projects ... with substantial productivity and long term benefits ... the benefits just weren't mainframe and/or necessarily ibm specific.

misc. recent postings
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#29 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd

in the mid-80s there were predictions that the business was going to double its world-wide revenue from $60b/annum to $120b/annum, and the corporation embarked on a massive new manufacturing facility building program (in anticipation of doubling mainframe hardware sales). i possibly made a career limiting move (at that time) by doing an economic analysis of the commoditizing of computer hardware and suggesting that the company was (instead) headed into the red.

it was similar to ... but different from ... the 1970-era analysis that supposedly motivated FS
https://www.garlic.com/~lynn/submain.html#futuresys

which attempted such aggressive new technology and integration that the controller clones wouldn't be able to keep up ... a couple specific references on the subject (references to huge ibm R&D overhead and not being competitive with the clone makers):
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003p.html#25 Mainframe Training

i had worked on one of the original clone controller projects as an undergraduate ... which was subsequently blamed for the clone/pcm controller business (and in turn for motivating the FS effort):
https://www.garlic.com/~lynn/submain.html#360pcm

it probably also wasn't very political during FS to characterize the project as being a case of the inmates in charge of the institution ... somewhat referring to a long running cult film down in central sq.

from 88 into 92 or so ... when we were pitching the 3-tier architecture that we had invented (while many were pushing SAA, claiming that real client/server wasn't needed ... just a thin, gui presentation capability):
https://www.garlic.com/~lynn/subnetwork.html#3tier
and were doing the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp
as well as keeping our hand in networking and the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we would periodically drop in on staff and executives in the new somers "pyramid power" building and talk about the unfolding commoditizing of computer hardware and the effects it would have on the computer business. i don't think it was a problem that they didn't understand the issues ... it was possibly that so many had their lifelong experiences and careers tied to the mainframe business segment ... that they couldn't see a way to (personally) adapt. i suspected that many were (secretly?) hoping to retire before having to directly confront the issues and then it would be somebody else's problem.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 05 Jun 2005 16:12:53 -0600
Efinnell15 writes:
Early to mid 90's things really started going south. When BOCA folded lots of them were reassigned to Back-office at Redmond and I guess are still there. Then they decided to terminate HESC and charge us commercial rates for everything while Sun and others were discounting 85%. Then they took away PSR's and SE's and farmed out the Marketing to ISV's competing against themselves and spreading lots of FUD. As Windows matured and got colossal penetration, the CS departments started teaching html and WEBdesign with JAVA and C++ and Frontpage as follow ups. No COBOL, ASM, Fortran, PL/1, JCL, etc to be seen.

SAA in the late 80s and early 90s was still trying to put the client/server genie back in the bottle
https://www.garlic.com/~lynn/subnetwork.html#3tier

along with pushing T/R configurations with 300 stations ... which helped support the concept of just sufficient aggregate bandwidth for doing GUI to relatively thin clients.

this was also helping protect the established terminal emulation paradigm and installed market.

in the mid-80s one of the senior people from the disk division gave a presentation at an annual, internal world-wide communication division conference ... where he made the claim that the head of the communication division was going to be responsible for the death of the disk division.

the scenario was that terminal emulation provided an effective initial market penetration strategy for the original PCs ... but as PC sophistication and PC applications matured ... the terminal emulation paradigm was becoming a boat anchor. lots of environments with evolving distributed applications were finding the terminal emulation spigot way too restrictive and were starting to replicate data out in the distributed environment. this created a big explosion in the demand for disk drives out in the distributed environment and at the same time contributed to a flattening of glass house disk (and eventually application and other hardware) growth.

the disk division created a number of products that would drastically improve the ability for distributed processing to access data in the glass house ... and nearly all of them were vetoed by the communication division .... the disk division having strategic responsibility for data within the glass house ... but the communication division had strategic responsibility for connectivity in excess of some distance, effectively anything leaving/entering the glass house.

collection of past postings on the terminal emulation subject
https://www.garlic.com/~lynn/subnetwork.html#emulation

besides having proprietary protocols and a terminal emulation install base to protect ... a contributing issue was that some number of large establishments believed the official decrees by numerous world govs. that tcp/ip would be eliminated and replaced by osi.

we were involved in trying to get HSP (high speed protocol) accepted as a work item in x3s3.3 (the iso chartered ansi organization responsible for protocols relating to osi layers 3 & 4). ISO (& ISO chartered organizations) had a directive that standardization work couldn't be done on stuff that violated the OSI model. HSP violated the model and was turned down because of (at least):

1) hsp would go directly from the transport interface to the lan mac interface. the lan mac interface corresponded to somewhere in the middle of layer 3/networking. hsp bypassed the layer3/layer4 interface and therefore violated osi.

2) hsp would support internetworking (aka IP). internetworking doesn't exist at all in the OSI model. supporting internetworking violates the OSI model.

3) hsp would go directly from the transport interface to the lan mac interface. the lan mac interface corresponds to approx. the middle of osi layer 3/networking. lan mac interfaces violate the OSI model, therefore hsp supporting the lan mac interface also violated the OSI model.

misc collected postings on hsp, osi, etc
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 06 Jun 2005 07:38:55 -0600
Efinnell15 writes:
Her Master Plumber Webpage(/~melinda) is instructional in many aspects of 360/370 and 390 architecture. Hmm, where's 380?

370-xa ... introduced 31-bit virtual and uniform shared channel addressing for multiprocessing.

note the 360/67 had both 24-bit and 32-bit virtual memory options ... and single uniform channel addressing capability for multiprocessing.

other 360 multiprocessors (and later 370) had shared single uniform addressing for memory ... but io/channels were processor specific. a shared i/o configuration was emulated by having multiple channel attachments on control units (with different processor-specific channels going to shared control units).

it wasn't until you got to 370-xa ... that larger virtual memory addressing re-appeared and all processors were able to uniformly address all channels.

370-xa was internally commonly referred to as 811 ... for the 11/78 date on the various architecture/design documents. each copy was individually numbered and had to be kept under double lock & key. they were registered, and periodically site security would come by and "audit" all such documents.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Plugs Big Iron to the College Crowd

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Plugs Big Iron to the College Crowd
Newsgroups: bit.listserv.ibm-main
Date: Mon, 06 Jun 2005 09:26:31 -0600
Efinnell15 writes:
Her Master Plumber Webpage(/~melinda) is instructional in many aspects of 360/370 and 390 architecture. Hmm, where's 380?

aka 370xa (or 811); part of this may have been ambivalence about signaling any generation change. remember that 1/3rd to 1/2 of the 70s had been taken up by FS ... which was killed before it was even announced. FS was going to have been a bigger departure from 360/370 ... than the 360 departure had been from prior generations.
https://www.garlic.com/~lynn/submain.html#futuresys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Mon, 06 Jun 2005 11:13:42 -0600
glen herrmannsfeldt writes:
There are some interesting cases in S/370 related to page boundaries. For most instructions, it is easy to see in advance which pages are needed and to interrupt if they aren't available. TR (translate) is one of the harder cases. TR uses a 256 byte translation table, and looks up each operand byte in the table and replaces it with the value in the table. As defined for S/360, it isn't allowed to fail for parts of the translation table that aren't actually used. If the table crosses a page boundary a trial execution is needed to determine which page will actually be referenced, before any data is modified.

on 360/67 virtual memory hardware ... the worst case was 8 page references ... an execute instruction that crossed a page boundary (2), the target instruction (of the execute) that crossed a page boundary (2), and an SS instruction where both storage operands crossed page boundaries (4). if the fetch/store protect feature was installed on a standard 360 ... both the starting and ending storage operand locations would be checked for permission before starting the instruction. in the 360/67 case, in addition to checking starting and ending storage operand permissions, it would check the starting and ending storage operand addresses for available pages (before starting the instruction). for 360 the maximum storage operand size was 256 bytes ... the standard 360 fetch/store protection feature was on 2k boundaries ... so a storage operand couldn't overlap more than two such 2k blocks; so only start and end checking was necessary. Similarly, 360/67 used 4k virtual pages and a 256 byte storage operand could also only overlap a maximum of two virtual pages (requiring only the start and end address of a storage operand to be checked).
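
to make the start/end precheck concrete ... a minimal C sketch (modern C, purely illustrative, not actual 360/67 microcode; page_present and all other names are invented): since a storage operand is at most 256 bytes, checking the page of the first and the last byte covers every page the operand can touch.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096u   /* 360/67 used 4k virtual pages */

  /* invented stand-in for the DAT hardware's page-presence test */
  static bool page_present(uint32_t vaddr) {
      return (vaddr / PAGE_SIZE) < 256;   /* pretend the first 1mb is mapped */
  }

  /* precheck only the first and last byte of a storage operand;
     a max 256-byte operand can span at most two 4k pages, so
     start+end checks cover everything before execution begins */
  static bool precheck_operand(uint32_t addr, uint32_t len) {
      if (len == 0 || len > 256) return false;
      return page_present(addr) && page_present(addr + len - 1);
  }

  int main(void) {
      printf("%d\n", precheck_operand(0x100000 - 16, 32));  /* 0: end page missing */
      printf("%d\n", precheck_operand(0x0FF000, 256));      /* 1: both ends mapped */
      return 0;
  }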

370 introduced a couple "long" instructions ... which were interruptible and nominally defined to operate a byte at a time (with the corresponding registers specified for address and length being updated as the instruction progressed). "long" instructions were not to precheck starting and ending storage operand addresses (as was the case for 360 and all other 370 instructions) ... but to check each updated storage operand address as the instruction execution progressed.

there was an early microcode bug in 370 models 115/125 ... where long instructions were prechecking the ending storage operand address prior to starting instruction execution (contrary to the architecture definition). this caused problems for coding sequences that used the fault interrupt from a long instruction to determine available accessible memory. an example was clearing available memory to zero ... taking the interrupt with the registers pointing at the last allowable memory location. with the 115/125 bug, it would interrupt before the instruction even started (so the register pointer/length would not have been updated, appearing to indicate no allowable memory at all).
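
a rough C sketch of that probing idiom (sizes, addr_ok and all other names are invented for illustration): a correct interruptible "long" clear updates the address/length registers as it progresses, so the fault leaves them pointing at the first unavailable location ... the buggy precheck faults before the registers are ever touched.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define BLOCK 2048u            /* illustrative block size */
  #define AVAIL (6 * BLOCK)      /* pretend only 12k is actually installed */

  static bool addr_ok(uint32_t a) { return a < AVAIL; }

  /* interruptible long clear: registers (addr/len) are updated as
     execution progresses, so a fault leaves addr at the first
     unavailable byte -- which is what the probing trick relied on */
  static void long_clear(uint32_t *addr, uint32_t *len, bool buggy_precheck) {
      if (buggy_precheck && !addr_ok(*addr + *len - 1))
          return;                         /* faults with registers untouched */
      while (*len > 0) {
          if (!addr_ok(*addr)) return;    /* fault in mid-instruction */
          /* (real hardware would store a zero byte here) */
          (*addr)++; (*len)--;
      }
  }

  int main(void) {
      uint32_t addr = 0, len = 16 * BLOCK;     /* ask for more than exists */
      long_clear(&addr, &len, false);
      printf("correct: faulted at %u\n", addr);      /* 12288 = memory size */

      addr = 0; len = 16 * BLOCK;
      long_clear(&addr, &len, true);
      printf("115/125 bug: addr still %u\n", addr);  /* 0 = "no memory at all" */
      return 0;
  }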

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Mon, 06 Jun 2005 12:14:52 -0600
glen herrmannsfeldt writes:
What did the 360/67 do with TR? Pages not referenced by actual translation table entries are not required to exist. S/370 carefully defines this case, but I don't know that it was done for the 360/67.

the "table" storage operand was treated as a single 256 byte operand ... and prechecked the starting and ending address. a page fault interrupt could occur for either the starting or ending address. standard 360 fetch/store protect optional feature worked similarly ... the start & ending address of each storage operand would be prechecked (and could result in store or fetch protection interrupt ... only fetch protection in the case of the TR "table" operand).

note that two successive page faults could happen in the case of the TR table storage operand (or for that matter, any storage operand that crossed a page boundary) ... the first page fault on the starting address ... and then, after the instruction was restarted, a second on the ending address.

there was a theoretical possibility that the starting page could be stolen during the fetch of the ending page ... resulting in additional page faults.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtual 360/67 support in cp67

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: virtual 360/67 support in cp67
Newsgroups: alt.folklore.computers
Date: Mon, 06 Jun 2005 12:42:36 -0600
the early cp67 releases just provided support for "regular" 360 virtual machines ... i.e. virtual machines with standard 360 real memory ... and did not support virtual machines with the address relocation hardware architecture found in the 360/67 (aka you could run cp67 on a real 360/67, but running cp67 in a virtual machine wasn't supported).

in recent post about networking, bitnet, & earn
https://www.garlic.com/~lynn/2005j.html#30 IBM plugs Big Iron to the College Crowd

there was a reference to an old posting that included email about the formation of earn
https://www.garlic.com/~lynn/2001h.html#65

the author of the subject email had many years earlier been on assignment to the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and was responsible for adding support in cp67 for virtual machines with address relocation hardware architecture. basically the 360/67 control registers had to be simulated ... recent reference:
https://www.garlic.com/~lynn/2005j.html#28 NASA Discovers Space Spies From the 60's

as well as "virtual" segment and page tables. Virtual address space segment and page tables in the memory of the virtual machine would have address translation from a virtual machine's virtual address space (3rd level addresses) to some address within the memory of the virtual machine (which was also virtual, aka 2nd level addresses).

Considering the real memory as "1st level addresses", and the virtual machine addresses as "2nd level addresses", then a virtual machine's virtual address space tables specified "3rd level addresses". Simulating the virtual machine's address relocation hardware required building shadow segment and page tables (for the 3rd level addresses). The virtual address space tables in the virtual machine translated from 3rd level addresses to 2nd level addresses. The shadow segment and page tables translated from 3rd level addresses directly to (real machine) 1st level addresses. whenever the virtual machine thought it was switching to its virtual address space tables (in the memory of the virtual machine), the cp67 simulation would actually switch to the shadow tables (that gave the 3rd level to 1st level address translation).
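
a minimal C sketch of that shadow-table composition (one-level tables and all names invented for illustration ... the real 360/67 used segment plus page tables): compose the virtual machine's 3rd-to-2nd level translation with cp67's own 2nd-to-1st level translation to fill in the 3rd-to-1st level shadow entries.

  #include <stdint.h>
  #include <stdio.h>

  #define PAGES   16
  #define INVALID 0xFFFFu

  /* one-level table: entry i maps virtual page i to another page number */
  typedef struct { uint16_t pte[PAGES]; } pagetable;

  /* the guest's tables map 3rd-level to 2nd-level pages; the host's
     (cp67's) tables map 2nd-level to 1st-level (real) pages; the
     shadow table maps 3rd-level pages directly to 1st-level pages */
  static void build_shadow(const pagetable *guest, const pagetable *host,
                           pagetable *shadow) {
      for (int p = 0; p < PAGES; p++) {
          uint16_t l2 = guest->pte[p];
          shadow->pte[p] = (l2 >= PAGES) ? INVALID   /* fault on first use */
                                         : host->pte[l2];
      }
  }

  int main(void) {
      pagetable guest, host, shadow;
      for (int p = 0; p < PAGES; p++) {
          guest.pte[p] = (uint16_t)((p + 1) % PAGES);  /* some 3rd->2nd mapping */
          host.pte[p]  = (uint16_t)(PAGES - 1 - p);    /* some 2nd->1st mapping */
      }
      build_shadow(&guest, &host, &shadow);
      /* 3rd-level page 3 -> 2nd-level page 4 -> 1st-level page 11 */
      printf("shadow[3] = %u\n", shadow.pte[3]);
      return 0;
  }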

misc. past posts mentioning virtual address space simulation with shadow tables
https://www.garlic.com/~lynn/94.html#48 Rethinking Virtual Memory
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004l.html#67 Lock-free algorithms
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#70 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005h.html#11 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Mon, 06 Jun 2005 18:05:45 -0600
glen herrmannsfeldt writes:

http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dz9zr003/7.5.141

For TR it isn't supposed to access table entries not needed by the source string. For protection, that can be done during execution, as there is no guarantee against partial execution before a protection violation. I suppose the likelihood of a translate table overlapping the beginning or end of a region, such that one page does not exist, is low. This is described in Nick Tredennick's book describing the design of the Micro/370 processor.


that is in the current "Z/architecture" PoP ... the original 360 PoP doesn't make that statement about TR ... I remember being told that for 360 ... all storage operands had their start & end addresses prechecked ... and that incremental checking didn't occur until later models.

the architecture redbook would have had the specifics ... the architecture redbook was done in cms script with conditional formatting for whether you got the full architecture manual ... or just the subsections that resulted in the PoP.

it used to have a section on instruction termination, cancel, abrogation, etc.

The Z/architecture pop implies that you could have a partially executed TR instruction, i.e. a fetch protection fault after some number of bytes had been substituted/translated. This would be a problem for a page fault, since there is no updated current pointer & length field (as in the long instructions) for restarting the instruction if you took a page fault after partial execution. In 360 ... the start and end bounds were just checked before the instruction was even started, and incremental byte-by-byte checking didn't occur.

The quoted Z/architecture pop description implies that if the 256 byte table operand crosses a 4k boundary ... the TR instruction will make a preliminary dry run over every individual byte in the (to be translated) source operand ... to see if a data-pattern-specific page fault occurs (avoiding the case of a page fault with a partially executed instruction that couldn't be correctly restarted).

I don't know when byte-by-byte TR instruction execution operand checking was introduced ... but (modulo fading memory) I'm reasonably sure that it wasn't on 360 or even 370. Neither the 360 nor the 370 PoP TR instruction writeup makes reference to the implied Z/architecture case of interruption of a partially executed instruction (and/or the implied dry run execution for a table within 256 bytes of the end of a 4k page). The precursor (to Z/architecture) ESA/390 PoP does mention it.

A frequent/common application for a short table was half-byte hex to character hex in storage displays ... i.e. expand half-byte nibbles into adjacent bytes and translate hex numbers x'0'-x'F' into characters 0-9,A-F (this only needed a 16-byte table ... not the whole 256 byte table).
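
the C analogue of that unpack-and-translate idiom (a sketch with invented names): expand each byte into two nibble values, then index each nibble through a 16-entry table.

  #include <stdio.h>

  /* 16-byte translate "table": only indices x'0'-x'F' are ever
     referenced, so the full 256-byte table isn't needed */
  static const char hextab[16] = "0123456789ABCDEF";

  /* expand half-byte nibbles into adjacent bytes, then "translate"
     each through the short table */
  static void hexline(const unsigned char *src, int n, char *out) {
      for (int i = 0; i < n; i++) {
          out[2*i]   = hextab[src[i] >> 4];
          out[2*i+1] = hextab[src[i] & 0x0F];
      }
      out[2*n] = '\0';
  }

  int main(void) {
      unsigned char data[] = { 0x00, 0x5A, 0xFF };
      char buf[7];
      hexline(data, 3, buf);
      printf("%s\n", buf);   /* prints 005AFF */
      return 0;
  }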

topic drift warning ....

all this has turned up something of possible interest.

segment protection (a bit in the segment table entry) and the ISTE and IPTE instructions were dropped from the 370 virtual memory architecture when the 165 engineers claimed that it would take them an extra six months to re-engineer the virtual memory retrofit for the 370/165s in the field (i.e. announced and shipped before 370 virtual memory). However, it was fully documented in the 370 redbook (along with numerous other unannounced bits & pieces) ... but not in the PoP for general availability (all 370s that had already implemented the dropped features were retrofitted to disable them, so that there was a common architecture across all 370s).

this caused a severe hardship for CMS ... which had been planning on making use of segment protection for shared segments (lots of cms interactive virtual machines all utilizing common kernel code ... instead of requiring a unique copy for every virtual address space).

now check this out in the esa/390 PoP appendix section on differences between 370 and 370/xa
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/F.1.4?SHELF=EZ2HW125&DT=19970613131822
F.1.4 Page Protection

A page-protection bit is provided in the page-table entry. Page protection can be used in a manner similar to the System/370 segment protection, which is not included in 370-XA.


......

W/o the planned segment protection bit for 370 ... CMS had to go thru extremely painful contortions in order to provide shared segment protection across different virtual address spaces.

I just checked the 370 PoP, GA22-7000-4, Sep75 (revised 9/1/75, by TNL GN22-0498) (originally pulled off bitsavers.org); pg. 59 gives the format/definition of segment-table entries ... and specifies that the bits have to be zero (no segment protection defined).

I believe that "page protection" comment from the appendix is a slip-of-the-tongue since there was no such segment protection made available on 370.

but this is a reference to "segment protection" finally being made available in 1982 on 308x and 4381 (running in 370 mode). 370-xa was also introduced with 308x ... which means as soon as a customer upgraded their 308x from running in 370 mode to running in 370-xa mode ... they lost segment protect (replaced by page protect, so i guess they can get away with the comment).
http://66.102.7.104/search?q=cache:YMNfiNYnhoYJ:www.research.ibm.com/journal/sj/234/ibmsj2304G.pdf+%2B%22segment+protect%22+%2Bibm&hl=en

it is also mentioned in melinda's vm history.
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

segment protect provided for permissions at the per address space level, in a per-address-space segment table ... with everybody sharing the same page table, same page table entries and same real pages (regardless of permissions). moving to page table protection, the permissions are in the shared table entries ... as opposed to the virtual address space specific tables (aka everybody sharing the entries has common permissions). the original 370 architecture tried to be careful by separating the per address space permissions from the actual shared entries.
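
a toy C model of the difference (all structures and names invented for illustration): with segment protection the read-only bit lives in the per-address-space segment entry, so two address spaces can share one page table with different permissions ... with page protection the bit lives in the shared page table entry, so every sharer gets the same permission.

  #include <stdbool.h>
  #include <stdio.h>

  /* shared page table entry: with page protection the read-only bit
     lives here, common to every address space sharing the table */
  typedef struct { unsigned frame; bool page_ro; } pte;
  typedef struct { pte *entries; int n; } pagetable;

  /* per-address-space segment entry: with segment protection the
     read-only bit lives here, so sharers can differ */
  typedef struct { pagetable *pt; bool seg_ro; } ste;

  static bool can_store(const ste *s, int page, bool seg_protect) {
      bool ro = seg_protect ? s->seg_ro : s->pt->entries[page].page_ro;
      return !ro;
  }

  int main(void) {
      pte shared[1] = { { 42, true } };
      pagetable pt = { shared, 1 };
      ste rw_space = { &pt, false }, ro_space = { &pt, true };

      /* segment protection: same shared page, different permissions */
      printf("%d %d\n", can_store(&rw_space, 0, true),
                        can_store(&ro_space, 0, true));   /* 1 0 */
      /* page protection: one permission for all sharers */
      printf("%d %d\n", can_store(&rw_space, 0, false),
                        can_store(&ro_space, 0, false));  /* 0 0 */
      return 0;
  }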

misc. past postings discussing the problems and hardships created when segment protection feature was dropped from original 370:
https://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#9 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004p.html#14 vm/370 smp support and shared segment protection hack
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#61 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#46 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#9 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Mon, 06 Jun 2005 19:13:39 -0600
Anne & Lynn Wheeler writes:
The quoted Z/architecture pop description implies that if the 256 byte table operand crosses a 4k boundary ... the TR instruction will make a preliminary dry run over every individual byte in the (to be translated) source operand ... to see if a data-pattern-specific page fault occurs (avoiding the case of a page fault with a partially executed instruction that couldn't be correctly restarted).

an aside observation ... such an implementation should theoretically allow a translate table that crosses a page boundary ... where all bytes "to be translated" resolve to table entries in the 2nd page ... and no entries resolve to table entries in the 1st page ... avoiding fetch & page faults for the 1st page.

while there are short tables involving translating byte values x'00' thru x'0f' ... i've also seen "short" tables involving translating byte values x'f0' thru x'ff' ... where the tr instruction is given a table origin address that is the starting address of the actual (short) table minus 240 bytes.

aka
TR source,table-240
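
a C sketch of the biased-origin trick (illustrative only; C can't legally form the out-of-bounds table origin, so the sketch declares the notional 256-byte table explicitly ... on the real hardware the first 240 bytes simply never existed, because TR never fetched them for data known to be x'f0'-x'ff'):

  #include <stdio.h>

  int main(void) {
      /* notional full table; only entries x'F0'-x'FF' are ever filled
         in or fetched -- the assembler version just points TR at
         (actual 16-byte table) - 240 and omits the rest entirely */
      char tab[256];
      const char short_tab[17] = "0123456789ABCDEF";
      for (int i = 0; i < 16; i++)
          tab[0xF0 + i] = short_tab[i];

      unsigned char ebcdic_digits[] = { 0xF1, 0xF9, 0xF5 };  /* ebcdic "195" */
      for (int i = 0; i < 3; i++)
          putchar(tab[ebcdic_digits[i]]);
      putchar('\n');                                         /* prints 195 */
      return 0;
  }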

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TSO replacement?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TSO replacement?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 07 Jun 2005 07:36:16 -0600
Jim.S.Horne@ibm-main.lst (Horne, Jim - James S) writes:
Rexx under TSO was available as of TSO v2. On VM, it was available by VM/SP release 3 - I don't think it was in release 2.

from melinda's
https://www.leeandmelindavarian.com/Melinda/25paper.pdf
By the time we celebrated VM/370's tenth birthday at SHARE 59 in New Orleans in August, 1982, IBM had declared VM strategic, and the number of licenses was growing wildly. Our installations had already begun to enjoy some fruits of IBM's new commitment to VM, such as XEDIT, the enhanced file system, and Pass-thru. Although IBM had been to SHARE to discuss the possibility of distributing only object code for its software, the general view was that it would soon realize how unwise that was. In 1982, the VM community had a lot to celebrate. Most of us believed that CMS was about to take over the world, so we gave it a wonderful birthday party.

And we soon got a very big reason to celebrate. Early in February, 1983, IBM announced VM/SP Release 3. After years of pleading, we would finally get REXX!

Mike Cowlishaw had made the decision to write a new CMS executor on March 20, 1979. Two months later, he began circulating the first implementation of the new language, which was then called "REX". Once Mike made REX available over VNET, users spontaneously formed the REX Language Committee, which Mike consulted before making further enhancements to the language. He was deluged with feedback from REX users, to the extent of about 350 mail files a day. By consulting with the Committee to decide which of the suggestions should be implemented, he rather quickly created a monumentally successful piece of software.

VM customers fell in love with REXX the moment they got it. REXX quickly became the language of choice for CMS applications. In no time, we began seeing REXX applications consisting of hundreds of thousands of lines of code, and few of us wanted to work in any other language.

By the time REXX celebrated its tenth birthday in 1989, it had been designated the SAA Procedures Language and had long since begun spreading to other systems, including MVS. Today, it is available on essentially every significant platform and continues to delight more users


.....

some postings about a large rex app done in the spring of 81 (I had then done a Share presentation on it ... one of the points, in the early days of the OCO wars, was that if it ever shipped, it would have to ship with source):
https://www.garlic.com/~lynn/submain.html#dumprx

some postings about coming up with 3-tier architecture and having to deal with saa folks
https://www.garlic.com/~lynn/subnetwork.html#3tier

lots of postings about internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Tue, 07 Jun 2005 07:52:04 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
1) C was designed as a portable, high-level assembler, and its scope for user error is the same as assembler. It is several times faster to code, but not much faster to debug - as it was designed for use as a workbench, that was fine.

frequent exploits in the world involving c-based applications are buffer overflows
https://www.garlic.com/~lynn/subintegrity.html#overflow

a frequent "mistake" is that target buffer locations lack any infrastructure defined lengths ... and operations just run over the end of the buffer.

there are assembler based operations where the underlying infrastructure has a default convention that buffer locations have encoded lengths, and conventional libraries and system support routines utilize that buffer length information whenever buffer operations are involved ... and the probability of buffer overrun mistakes is radically less than in a typical c language environment.

it is possible, even in assembler coding ... to drastically reduce certain kinds of coding mistakes if the underlying infrastructure conventions include simple additional structure ... aka even assembler environments can have established coding conventions that contribute to significantly reduced coding errors.

a previous point was that C-language buffer length coding mistakes can be eliminated if the coder just memorizes all buffer lengths and faithfully applies that information to every available buffer. Even in assembler language environments where the buffers carry explicit length information, the probability of buffer overrun mistakes can be drastically reduced ... because of the reduced dependency on human memory based accounting operations.
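
a minimal C sketch of such a length-carrying convention (the lbuf structure and all names are invented for illustration): the declared maximum travels with the buffer, and the copy routine consults the buffer rather than the programmer's memory.

  #include <stdio.h>
  #include <string.h>

  /* buffer that carries its own declared maximum length */
  typedef struct {
      unsigned short maxlen;   /* set once, at allocation */
      unsigned short len;      /* current contents */
      char data[256];
  } lbuf;

  /* every copy consults the buffer's own length -- there is no
     separate, human-remembered length to get wrong */
  static void lbuf_copy(lbuf *dst, const char *src, size_t n) {
      if (n > dst->maxlen) n = dst->maxlen;   /* truncate, never overrun */
      memcpy(dst->data, src, n);
      dst->len = (unsigned short)n;
  }

  int main(void) {
      lbuf b = { 16, 0, "" };
      lbuf_copy(&b, "this is far longer than sixteen bytes", 38);
      printf("%.*s (%u bytes kept)\n", (int)b.len, b.data, (unsigned)b.len);
      return 0;
  }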

i might even assert that the c-language buffer length dependency on humans memorizing every buffer length ... and possibly having to rememorize them after not having worked on a program for a period of months ... is analogous to the security shared-secret conventions ... where people are assumed to correctly memorize scores of passwords ... even those that they use infrequently:
https://www.garlic.com/~lynn/subintegrity.html#secrets

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Tue, 07 Jun 2005 10:01:26 -0600
since TR was never originally designed to be an incrementally executing, interruptible, and restartable instruction (like the "long" instructions introduced with 370) ... presumably TR has to pre-resolve all possible real addresses before starting (when running with DAT on).

the standard 360/370 scenario was that doing the start/end bounds checking would also resolve the real addresses (when running with DAT on) ... so the real addresses were all available during the actual instruction execution w/o having to re-interrogate the TLB.

If another CPU had done a global tlb entry invalidate (PTLB was introduced in 370 to do this globally for all entries in all TLBs; originally there was also selective invalidate, ISTE & IPTE, but they were dropped before 370 shipped) ... unless the whole processor complex interlocked until in-flight instructions completed, system software would have to assume that there could be some dangling use of the associated real page addresses by instructions in flight. this would seem preferable to having a non-interruptible, non-restartable instruction like TR re-interrogate the TLB for a real address (while in progress) and find the required entry had evaporated.

the 370 architecture redbook had some number of instruction scenarios where the instruction could be abrogated, but that gets difficult for an instruction that is incrementally doing storage modifications.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A second look at memory access alignment

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A second look at memory access alignment
Newsgroups: comp.arch
Date: Tue, 07 Jun 2005 10:23:39 -0600
just for completeness ... from 360/67 functional characteristics (obtained from bitsavers.org), a27-2719.

in Instruction Timings section, starting pg. 43

... there are mentions of some special instruction timings but no mention of the dry run for TR (some amount on EDMK) ... as in the Z/architecture & esa/390 PoP.

there is on pg. 44,
7. Instruction times for STM, STMC, LM, LMC, VFL Decimal, and Logical SS format instructions are modified as follows. When translation is indicated, the beginning and ending address of each field is tested by the translation process before instruction execution. Execution time is exposed to one of the following increments of additional time.

...

and TR from pg. 53 of Instruction Timings section
Instruction   Form   Mnemonic   Model 67-1 Times (us)   Model 67-2 Times (us)
Translate     SS     TR         1.94+1.78N              2.06+1.78N

there is 2nd level detail on the above on pg. 55

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Tue, 07 Jun 2005 12:35:16 -0600
pg_nh@0502.exp.sabi.UK (Peter Grandi) writes:
This reminds me (very bemusedly) of a consulting assignment I had once trying to help a large oil company sort out their repository of drilling data tapes.

Several hundred thousand tapes, housed in a large dedicated building, and as I investigated the circumstances it turned out that nobody had much of an idea what was the format of data on about a third of those tapes :-). Fortunately my assignment was about as-is media migration, not trying to reverse engineer them. :-).

I have always been a believer in putting the metadata and/or documentation with the data too, just in case :-).


at the ieee metadata meetings we were going to in the early 90s ... there was fairly good representation from the oil industry.

at one point somebody from the oil industry said that they had a project to define metadata for the oil industry ... hoping that it would be used so the metadata was captured as the data was recorded ... aka the Bob Adair model from the 60s at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

one big issue was trying to use an RDBMS as a repository for the oil industry metadata description. now most RDBMS have been pretty good at representing relatively uniform and homogeneous information ... minor collections of posts about origins of RDBMS and SQL
https://www.garlic.com/~lynn/submain.html#systemr

however, RDBMS can quickly get convoluted when trying to represent non-uniform and/or non-homogeneous information. they mentioned using something like 995 tables to represent the oil industry metadata in RDBMS form.

NIH NLM has had a somewhat similar activity with UMLS
http://www.nlm.nih.gov/research/umls/about_umls.html

there was some joke from the early 90s about it taking 18 months elapsed time attempting to (relational) "normalize" new metadata changes corresponding with 9 months of new/changed medical knowledge.

slightly related is the glossary/taxonomy work at
https://www.garlic.com/~lynn/index.html#glosnote

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public disclosure of discovered vulnerabilities
Newsgroups: sci.crypt,comp.arch
Date: Tue, 07 Jun 2005 12:42:50 -0600
"Hank Oredson" writes:
Oh my. Did you know there was computing before bugtraq? Are you aware that there were bugs (and exploits!) prior to it? You are talking about NEW stuff, we are talking about OLD stuff. OLD as in 40 years ago. The early 60s. For some odd reason nobody posted those to bugtraq ...

in a similar thread from the end of 2004 thru early 2005 ... I raised the subject of the air force evaluation of the multics system, which appeared to never have had a buffer overflow exploit.
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation

a buffer overflow posting collection
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch.arithmetic, comp.arch, alt.folklore.computers
Date: Tue, 07 Jun 2005 12:35:16 -0600
Subject: Re: Where should the type information be?
Peter Grandi writes:
This reminds me (very bemusedly) of a consulting assignment I had once trying to help a large oil company sort out their repository of drilling data tapes.

Several hundred thousand tapes, housed in a large dedicated building, and as I investigated the circumstances it turned out that nobody had much of an idea what was the format of data on about a third of those tapes :-). Fortunately my assignment was about as-is media migration, not trying to reverse engineer them. :-).


I have always been a believer in putting the metadata and/or documentation with the data too, just in case :-).

at the ieee metadata meetings we were going to in the early 90s ... there was fairly good representation from the oil industry.

at one point somebody from the oil industry had said that they had a project to define metadata for the oil industry ... hoping that it would be used for recording as the data was recorded ... aka the Bob Adair model in the 60s at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

one big issue was trying to use an RDBMS as a repository for the oil industry metadata description. now most RDBMS have been pretty good at representing relatively uniform and homogeneous information ... minor collections of posts about origins of RDBMS and SQL
https://www.garlic.com/~lynn/submain.html#systemr

however, RDBMS can quickly get convoluted when trying to represent non-uniform and/or non-homogeneous information. they mentioned using something like 995 tables for representing the oil industry metadata in RDBMS form.

NIH NLM has had a somewhat similar activity with UMLS
http://www.nlm.nih.gov/research/umls/about_umls.html

there was some joke from the early 90s about it taking 18 months elapsed time attempting to (relationally) "normalize" the new metadata changes corresponding to 9 months of new/changed medical knowledge.

slightly related is the glossary/taxonomy work at
https://www.garlic.com/~lynn/index.html#glosnote

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Public disclosure of discovered vulnerabilities

From: lynn@garlic.com
Newsgroups: sci.crypt, comp.arch
Date: 8 Jun 2005 07:48:14 -0700
Subject: Re: Public disclosure of discovered vulnerabilities
Vernon Schryver wrote:
30 years might be stretching it, but I seem to recall some of those from before 1985.

story about buffer overflow from some code i wrote getting close to 40 years ago. i had added TTY terminal support to cp67. it was all written in assembler and the underlying infrastructure conventions had explicit lengths associated with everything. i thought i was being smart using one-byte operations for some of the values used in calculating lengths (because tty terminals were something like 72 or 80 chars ... so 255 should be more than enough).

a couple years later somebody at mit was hooking up an emulated tty terminal that was something like a plotter or some such thing (located at harvard). anyway, they changed some of the max. length values that corresponded to the plotter (1200?) but didn't catch my tricky one-byte operations. this is slightly related to the previous post on multics ... since that was about the time i migrated to the 4th floor ... and the multics group was on the 5th floor.
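
purely for illustration ... a minimal sketch in C (all names here are hypothetical; the original was 360 assembler) of the kind of one-byte length truncation involved:

  #include <stdio.h>

  int main(void)
  {
      unsigned char len8;              /* one-byte length field */
      unsigned int device_max = 1200;  /* emulated "plotter" line length */

      /* fine for 72/80-char TTYs, where lengths fit in 255 ... */
      len8 = (unsigned char)device_max;   /* ... but 1200 mod 256 = 176 */
      printf("real max %u, one-byte field holds %u\n",
             device_max, (unsigned)len8);

      /* any buffer check done against len8 (176) instead of 1200
         lets a long line overrun the buffer it was checked against */
      return 0;
  }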

the following story (from the multics history site) is about the cp67 system crashing 27 times in one day at the mit urban systems lab.
https://www.multicians.org/thvv/360-67.html

cp67 had a fair amount of attention to security. in the late 60s some of the people broke away and formed a couple time-sharing service bureaus ... and they had to provide a fairly high level of integrity and confidentiality for their various corporate clients (many in the financial industry).
https://www.garlic.com/~lynn/submain.html#timeshare

also the science center started offering some time-sharing services to corporate hdqtrs people who were processing the most sensitive of corporate data ... while at the same time allowing various bu, mit, harvard, etc students access to the same system
https://www.garlic.com/~lynn/subtopic.html#545tech

Public disclosure of discovered vulnerabilities

From: <lynn@garlic.com>
Newsgroups: sci.crypt,comp.arch
Subject: Re: Public disclosure of discovered vulnerabilities
Date: Thu, 09 Jun 2005 07:28:33 -0700
Larry Elmore wrote:
I'm not sure I understand the logic here. The fact that many major apps that were developed in assembly didn't suffer from these errors (even though these errors are easier to make in assembly) doesn't really tell us anything except that if one is careful enough and devotes enough time and attention to it, good results can be achieved even with relatively crappy tools. That does *not* mean better tools won't make it easier to get equally good results. How many more major apps that were developed in assembly language _did_ suffer from these errors? Then how do those numbers compare with C or Ada or Lisp or whatever?

my assertion previously in this thread and many other similar threads
https://www.garlic.com/~lynn/subintegrity.html#overflow

that some large assembler based implementations didn't suffer from common c-language buffer overflows ... was because the underlying infrastructure had a convention that carried explicit lengths with every buffer ... and common buffer operations tended to take advantage of this information ... it minimized the requirement for the programmer to manually manage the administrative information regarding buffer lengths. it wasn't whether or not the programmer was able to shoot themselves in the foot, it was a matter of the amount of implicit information that programmers were expected to manually track.
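
a minimal illustrative sketch, in C with hypothetical names (the actual implementations were 360 assembler conventions), of the buffer-with-explicit-length idea:

  #include <string.h>

  /* every buffer carries its own lengths; infrastructure operations
     consult them instead of trusting caller-computed sizes */
  struct buf {
      size_t maxlen;    /* allocated size, set once at allocation */
      size_t curlen;    /* bytes currently in use */
      char   data[1];   /* storage follows the header */
  };

  /* infrastructure copy routine: truncates rather than overruns, so
     the programmer isn't manually tracking the administrative length */
  size_t buf_copy_in(struct buf *b, const char *src, size_t n)
  {
      size_t take = (n < b->maxlen) ? n : b->maxlen;
      memcpy(b->data, src, take);
      b->curlen = take;
      return take;
  }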

in other postings in other threads ... i pointed out that such environments were prone to register management problems ... where the programmer was responsible for manually maintaining the administrative information regarding current register contents ... and there was a higher frequency of errors where registers were used under an implicit assumption about the current register value ... and the assumption was wrong. frequently a contributing factor was complex path flows ... where infrequently used paths might not have established the register contents expected for later use. in the buffer length scenario ... it is possible for the underlying infrastructure to manage otherwise implicit buffer length information and contribute to a drastic reduction in buffer length errors ... regardless of the programming language. by comparison, assembler tends to require the programmer to manually manage the implicit administrative information of current register contents at any moment ... and as a result tends to have a higher frequency of register content management related errors.

almost all environments where buffers are dynamically allocated and released ... seem to suffer from implicit programmer manual administrative management failures related to dangling pointers ... program reliance on pointers that point to storage areas that are no longer what they are assumed to be.
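
a minimal illustration in C of the dangling pointer failure being described ... the programmer's administrative record of the pointer outlives the allocation it referred to:

  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      char *buf = malloc(64);
      if (buf == NULL) return 1;
      strcpy(buf, "in use");
      free(buf);        /* storage returned to the allocator */
      /* buf still holds the old address; the storage it points at
         is no longer what it is assumed to be ... any use of it here
         is the classic dangling pointer failure */
      buf = NULL;       /* the manual administrative fix: drop it */
      return 0;
  }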

whenever there is increased dependence on programmer manual management of some implicit administrative detail ... there is an increased frequency of related failures.

virtual 360/67 support in cp67

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers
Subject: Re: virtual 360/67 support in cp67
Date: Thu, 09 Jun 2005 21:47:10 -0700
Simon Marchese wrote:
Well I certainly remember that running VM/370 under VM/XA was do-able and in fact tactically necessary when the 370/XA architecture came out.

well ... the original post
https://www.garlic.com/~lynn/2005j.html#38 Virtual 360/67 support in cp67

said that the support for virtual 360/67 (allowing cp/67 to run under cp/67) was in large part the work of the person ... who many years later headed up the earn networking effort
https://www.garlic.com/~lynn/2001h.html#65
and
https://www.garlic.com/~lynn/subnetwork.html#bitnet

one of the projects after providing virtual 360/67 support ... was then to provide virtual 370 support (under cp67 running on a real 360/67) ... including 370 address relocation support (which hadn't been announced at the time). The virtual 370 support (with address relocation support) was in regular production use a year before the first engineering 370 with real hardware virtual memory support was operational. some past posts about cp67h & cp67i systems:
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: re:  Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Date: Thu, 09 Jun 2005 23:45:32 -0600
Newsgroups: bit.listserv.vmesa-l
Jim Bohnsack on Thu, 9 Jun 2005 22:09:09 wrote:
That sounds like the marketing stuff I heard when I joined IBM in early 1967. "Scatter read and gather write". It could be done by chaining CCW's and maybe that was the point, i.e. to be able to show that it was technically possible to do it vs. whatever the competition could or could not do.

3 people came out from the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and installed cp67 at the univ. the last week of jan. 1968.

at that time cp67 did single-page-at-a-time transfers (on 2311 disk, 2314 disk, and 2301 fixed head drum) and fifo queuing.

the 2301 drum had 4mbyte capacity and cp67 would drive it at a peak of about 80 page io/sec. in some sense the 2301 drum was a 2303 drum ... modified to read/write four heads in parallel ... getting four times the data transfer rate of a 2303 drum (about 1.5mbytes/sec).

i modified the paging disk and drum i/o to do ordered seek queueing and to dynamically chain pending requests into a single i/o. the request chaining got the 2301 to peak at 300 page io/sec ... rather than about 80. the ordered seek queueing (and chaining of queued requests for the same cylinder) also improved 2314 thruput.
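
a minimal sketch, in C with hypothetical names, of the ordered seek queueing idea (simplified ... a real implementation would also track the current arm position and sweep direction, and chain same-cylinder requests into one i/o):

  struct req {
      unsigned    cyl;     /* target cylinder */
      struct req *next;
  };

  /* insert a request in ascending cylinder order instead of fifo,
     so the arm sweeps across the disk rather than thrashing */
  void enqueue_ordered(struct req **head, struct req *r)
  {
      struct req **p = head;
      while (*p != NULL && (*p)->cyl <= r->cyl)
          p = &(*p)->next;
      r->next = *p;
      *p = r;
  }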

i also redid the page replacement selection algorithm to use a clock-like global LRU
https://www.garlic.com/~lynn/subtopic.html#wsclock

it had been: scan memory for the first virtual page that didn't belong to an "in-q" virtual machine ... and if it didn't find one (after a complete scan) ... take the next available virtual page.
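
a minimal sketch of the clock-style global LRU idea, in C with hypothetical names (the real thing worked against the hardware reference bits of all real page frames):

  #define NFRAMES 4096

  struct frame { int referenced; };      /* simplified reference bit */
  static struct frame frames[NFRAMES];
  static unsigned hand;                  /* the clock "hand" */

  /* one selection: a referenced frame gets its bit reset and another
     trip around the clock; the first unreferenced frame is the victim */
  unsigned select_victim(void)
  {
      for (;;) {
          unsigned idx = hand;
          hand = (hand + 1) % NFRAMES;
          if (frames[idx].referenced)
              frames[idx].referenced = 0;
          else
              return idx;
      }
  }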

in the mid-70s most systems started transitioning from being primarily memory and processor constrained to being i/o constrained. at one point i made the observation that disk relative system performance had declined by a factor of ten times over a period of years. this somewhat got the disk division to assign their performance modeling group to refute the claims ... after a couple months they came back and observed that i may have slightly understated the problem.

in the 80s time frame, "big pages" were introduced first in mvs and then in vm. big pages were 10-page clusters that fit on a full 3380 track. this was coupled with a moving cursor allocation algorithm. large 3380 areas were allocated for paging (possibly ten times more than would normally be expected to be required). virtual pages for the same virtual machine would be gathered together and written out in ten-page clusters. when any page in a "big page" was required ... all ten pages would be fetched and the disk space would be deallocated. writing out a big page would select the available, empty track closest to the current arm cursor position. the cursor tended to sweep across the disk surface ... with high allocation just behind the cursor and mostly empty tracks just in front of the cursor travel. "big pages" might increase the total page transfers by possibly 30 percent compared to doing single page-fault-at-a-time transfers ... however, each cluster moved in one disk access. since disk arm accesses were the bottleneck, the increased data transfer and increased real storage utilization were traded off against fewer arm accesses.
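
a minimal sketch of the moving-cursor allocation, in C with hypothetical names and sizes:

  #define NTRACKS 10000            /* hypothetical 3380 paging area */
  static unsigned char track_empty[NTRACKS];   /* 1 = empty track */
  static unsigned cursor;                      /* sweep position */

  /* pick the first empty track at or ahead of the cursor; allocation
     trails just behind the cursor as it sweeps across the area */
  int alloc_big_page_track(void)
  {
      for (unsigned i = 0; i < NTRACKS; i++) {
          unsigned t = (cursor + i) % NTRACKS;
          if (track_empty[t]) {
              track_empty[t] = 0;
              cursor = (t + 1) % NTRACKS;
              return (int)t;       /* track holds one 10-page cluster */
          }
      }
      return -1;                   /* paging area full */
  }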

misc. past posts about the decline in disk relative system performance
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?

misc. past "big page" posts
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries

Banks

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Banks
Newsgroups: bit.listserv.ibm-main
Date: Fri, 10 Jun 2005 19:55:46 -0600
R.Skorupka@ibm-main.lst (R.S.) writes:
In other words dozens of people know my social security number. These people can collect such information. Are the people honest? ALL of them?

there was a study last year that found as much as 70 percent of such fraud involves insiders

postings on shared-secret authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#secrets

harvesting personal information & shared-secrets for id/account fraud
https://www.garlic.com/~lynn/subintegrity.html#harvest

general fraud postings
https://www.garlic.com/~lynn/subintegrity.html#fraud

misc. past posts mentioning insiders and fraud
https://www.garlic.com/~lynn/aadsm14.htm#28 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm17.htm#38 Study: ID theft usually an inside job
https://www.garlic.com/~lynn/aadsm17.htm#39 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#29 EMV cards as identity cards
https://www.garlic.com/~lynn/2001j.html#54 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2003g.html#26 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#16 New Method for Authenticated Public Key Exchange without Digital Ceritificates
https://www.garlic.com/~lynn/2004j.html#15 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#37 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2005g.html#37 MVS secure configuration standard
https://www.garlic.com/~lynn/2005i.html#1 Brit banks introduce delays on interbank xfers due to phishing boom
https://www.garlic.com/~lynn/2005i.html#11 Revoking the Root

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Banks

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Banks
Newsgroups: bit.listserv.ibm-main
Date: Fri, 10 Jun 2005 20:51:39 -0600
DASDBill2 writes:
An epidemic of identity theft is sweeping the U.S., but it hasn't spread to Europe. Here's what they're doing right.

a lot of stuff labeled identity theft is actually account fraud ... somebody obtains your CC account number and performs a fraudulent transaction.

old standby posting about security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Date: Sat, 11 Jun 2005 08:03:35 -0600
Newsgroups: bit.listserv.vmesa-l
Kris Buelens on Fri, 10 Jun 2005 1:46:26 wrote:
Not that I don't think CP's paging I/O is worse than ordinary disk I/O, but you forget some effect of saved segments when comparing CMS and CP I/O:
- when you use a CMS program: the whole program must be read in storage before you can start.
- as only user of a saved segment: only the program sections you really use are brought in storage. Initially nothing is in storage, so you start with a page fault, CP brings some pages in from the NSS in the spool, you run a bit until another page fault happens, etc.

I had started the advanced virtual memory management stuff on cp67 and then ported it to vm370 late in the vm370 release 1 timeframe. it had both advanced shared segment support as well as a page mapped filesystem for cms. eventually, the vm370 port was installed at a number of internal locations ... including HONE
https://www.garlic.com/~lynn/subtopic.html#hone

vm370 had originally adapted a rather primitive mechanism from cp67 with the system named table ... basic kernel-defined names & page mapping ... in a dedicated section of the system page-formatted area for saved/shared segment definition. it required system privileges to initialize the area ... and the same information was shared across all users of the system. it was also only available via the "IPL" command (ipl'ing a named system).

HONE had become the world-wide support for all field, sales, and marketing ... predominantly using applications that had started out being delivered in cms\apl (on the cp67 platform). they migrated to vm370 and apl\cms. To improve performance ... most of the apl interpreter had been configured for shared segments ... and to get shared segments in that early period ... a custom IPL named system had to be defined that included the apl interpreter pages. This was all set up to happen automatically when the salesmen logged in ... and placed them in a custom, tailored environment written in APL.

There was a performance problem with some of the more detailed configurators that had been written in APL and were taking enormous amounts of processor time. These were rewritten in Fortran and got an order of magnitude throughput speedup. The problem was how to get a shared segment APL (that had been ipl'ed by name) to invoke a Fortran program ... make the APL shared segments go away, drop into a normal (ipl by name) CMS, execute the fortran program, and then re-enter the (ipl by name) APL environment (and retrieve the results) ... all transparent to the salesman (w/o them having to type anything).

So part of the advanced virtual memory management functions that i did was page mapped filesystem for CMS
https://www.garlic.com/~lynn/submain.html#mmap

and modified CMS genmod/loadmod kernel functions. On genmod ... there was an option to specify what, if any, segment page ranges (of the cms module) should be loaded with the shared segment specification. At load time, loadmod would use the page map interface to the cp kernel to map the pages in the cms filesystem to virtual memory ... along with any appropriate segment sharing specifications. the page map interface also had a bunch of glitzy performance advisory options ... 1) prefetch all pages and don't return until the prefetch is over (synchronous), 2) prefetch all pages but return immediately ... allowing asynchronous behavior (execution would be appropriately serialized ... if execution needed a page that was still in the process of being fetched ... it would be serialized with standard page fault processing), 3) just perform the mapping and return (no prefetch). under the covers, the kernel would dynamically decide what to actually do about the advisory based on existing configuration and load.
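
a hypothetical C rendering of the interface shape being described (names and signature invented purely for illustration ... the real interface was a cp kernel call):

  /* prefetch advisories for mapping a module's pages; the kernel
     treats these as hints and adapts to configuration and load */
  enum map_advice {
      MAP_PREFETCH_SYNC,    /* fetch all pages, return when done */
      MAP_PREFETCH_ASYNC,   /* start fetching, return immediately;
                               early use serializes via page faults */
      MAP_NO_PREFETCH       /* just establish the mapping */
  };

  /* map npages of a filesystem-resident module at vaddr; 'shared'
     asks for read-only shared segment treatment where specified */
  int page_map(unsigned fs_block, void *vaddr, unsigned npages,
               int shared, enum map_advice advice);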

Of course there were also some tricks under the covers for how to guarantee the uniqueness and integrity of "shared segment names" across all virtual address spaces.

The result was that HONE could have a normal CMS named system that was automatically ipl'ed at login. They could then invoke an apl interpreter from a page mapped filesystem ... getting shared segments for the apl interpreter as needed. The apl interpreter could exit, invoke fortran programs, and re-enter ... where the shared segment processing was now just a natural part of cms program invocation/termination.

I then did a hack where the CMS ipl kernel was on a page mapped filesystem area ... and the IPL processing could specify what part of the kernel being loaded was to have shared segments ... eliminating the requirement for the kernel named table even for ipl-by-name processing.

There were also some tricks with relocatable/floating shared segments ... where the same exact shared segment could appear at different/arbitrary virtual addresses in different virtual address spaces. This was harder to do because of the standard 360 compiler convention of having absolute adcons (they are called relocatable adcons ... but the relocation function occurs as part of TXT & RLD record processing by the loader ... the executable image has fixed adcons). lots of posts discussing the fixed adcon problem in r/o shared segments that can occur at arbitrary virtual addresses
https://www.garlic.com/~lynn/submain.html#adcon

The vm370 group was out looking for lots of new stuff to incorporate into vm370 release 3 and decided that they needed to expand the shared segment support. The cms group picked up a lot of the advanced virtual memory management support somewhat as is (but w/o any of the page mapped filesystem support) ... this included various rewrites of CMS code so that it could operate in shared segments (rewrite of the standard CMS editor to remove things like internal variable areas that couldn't be modified if the area was located in a read-only protected shared segment). Since the page mapped filesystem wasn't being picked up ... the advanced shared segment process then had to be remapped into the standard, existing kernel ipl-by-name tables. A new "diagnose" instruction was added to interface to the named table capability (mapping stuff from a named table to virtual memory ... w/o also invoking the ipl function).

This drastically reduced subset (w/o the page mapped filesystem support) was released as discontiguous shared segments.

The group was still out in the burlington mall location ... but not too long afterwards ... there was the next round of decisions to kill the VM product and shut down the burlington mall location. All of the group was needed in POK to work on the VMTOOL ... an internal-only virtual machine support infrastructure that was required for mvs/xa development ... and was never intended for release. Eventually endicott lobbied to retain the vm/370 product mission (as opposed to completely killing the vm370 product) ... but pok still required that most of the product group people move to pok to support the vmtool and mvs/xa development. Some number of the people decided not to move, left ibm, and joined other computer companies in the area (there was a joke that whoever made the decision to kill the burlington mall location made a very significant contribution to DEC VMS).

after a long hard road ... with efforts by a lot of people ... the vmtool group that supposedly was only there to support mvs/xa development ... was finally allowed to release vm/xa. vm/370 had rewritten large amounts of code that had been cp67 ... and vm/xa was a large code difference from what had been in vm/370. One of the things that vm/xa did was to move the kernel-defined named system table and named system areas into the spool file system.

the page mapped filesystem for cms with all the advanced fancy bells and whistles for performance and feature/function was never part of a standard release. An abbreviated version of the page mapped filesystem did make a special release in pc/xt/at vm ... because of the severe real storage constraints and limited space that CMS had to operate in. because the real memory page constraints were so severe on the xt/at/pc ... being able to simply page-map CMS applications ... w/o having to go thru all of the pain of completely pre-reading them into virtual memory (before starting execution) showed a big pay-off.

standard page mapping ... w/o performance hints ... was one of the big performance hits on tss/360. If you simply do a page mapping and then synchronously page fault one page at a time ... getting in a 1mbyte program would require 256 separate, synchronous page faults (with individual waits ... and possible system and disk arm queuing delays associated with each one). Normal CMS I/O program loading would typically try and preload the 1mbyte application in 64kbyte chunks at a time (and execution couldn't start until the whole thing was in core). Depending on a lot of factors ... prereading a mbyte in 64kbyte chunks ... can be significantly more efficient than synchronously doing 256 separate 4k reads. The cms page mapped filesystem attempted to avoid the severe performance mistake of the tss/360 page mapped filesystem (and the standard named system operation) ... by dynamically adapting the fetch process to configuration and load (potentially even dynamically creating full cylinder reads if the conditions were appropriate).
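
as rough, purely illustrative arithmetic (assuming something like 30ms per disk access for queueing+seek+rotation and ignoring transfer time): 256 synchronous single-page faults is 256 x 30ms ... nearly 8 seconds just to get the 1mbyte program in ... while 16 preloads of 64kbyte chunks is 16 x 30ms ... roughly half a second ... better than an order of magnitude difference, before counting any chaining or full-track optimizations.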

The big payback from the named system stuff ... normally isn't the single page-fault-at-a-time fetch (which is typically a performance degradation) ... but that shared segments are involved ... and the pages may already be resident in real memory because of activity in other virtual address spaces.

there were also other bits and pieces under the covers ... since there was no ccw translation and the kernel under the covers was dynamically adapting the execution ... it also eliminated some of the integrity holes ... like letting a virtual machine define a looping channel program. since it was page mapped ... it could also do various kinds of caching, sharing, and other magic under the covers (like 3rd party redirection as to the source of the pages ... the simplest implementation for a page mapped filesystem was a one-to-one mapping between a page and a disk location ... but it could also get quite a bit more complicated).

Back as undergraduate
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

in addition to the previously mentioned work ... I had done extensive pathlength optimizations to speed up the operation of OS/360 MFT (and then MVT) under cp67. I then started looking at CMS ... a big pathlength item in CMS was disk i/o CCW translation. Investigating CMS ... it always did disk I/O synchronously ... aka after doing the SIO for disk I/O it immediately entered wait state for the disk I/O to complete. Furthermore, the disk CCWs were extremely predictable and never varied (in part because the CMS filesystem was effectively emulating fixed-block architecture ... even from the start in the mid-60s when it only had CKD disks at its disposal). I invented a new kind of ccw op-code, x'255', that was defined to do the seek/search/read/write operation and be "synchronous" ... aka the SIO wouldn't complete until the disk i/o had completed ... and then it would simulate CC=1, csw stored for the SIO. This drastically reduced the pathlength for supporting cms disk I/O (overhead of doing regular disk ccw translation ... along with all the pathlength of simulating various instructions and other operations in cms).

This ran afoul of Bob Adair at the science center because it violated the virtual machine architecture. It was doing something that wasn't defined in the real 360 architecture. The "rule" became that cp67 could deviate from the 360 principles of operation (and 360/67 functional characteristics) only to the extent that something was defined as model dependent in the PoP and architecture documents. Practically, this became extensive use of the "diagnose" instruction, because the "real" diagnose instruction was defined as having a model dependent implementation. Any features that cp67 provided to virtual machines that deviated from the 360 PoP/architecture/etc ... had to be done thru the "diagnose" instruction ... under the principle that cp67 virtual machines were their own specific kind of real 360 machine model. This resulted in my x'255' ccw opcode being remapped into cms diagnose I/O.

misc. past posts on mft and other pathlength work.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2005b.html#41 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005g.html#56 Software for IBM 360/30

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Date: Sat, 11 Jun 2005 08:26:26 -0600
Newsgroups: bit.listserv.vmesa-l
minor page mapped filesystem addenda ... in addition to the genmod/loadmod modifications for defining/supporting shared segments as part of the standard cms executable page mapped filesystem ... there was another minor hack to the filesystem block allocation ... that allowed for requesting contiguous record allocation ... if it couldn't get it contiguous ... it would try non-contiguous ... but return a code to the caller. genmod attempted to do contiguous allocation for program executables ... and then loadmod could specify the page mapped loading advisory that everything should be fetched asynchronously. the cp kernel might then simultaneously generate page fetches for every 4k page in the executable (modulo configuration and load/contention issues) ... and if the records had all been allocated contiguously (a single contiguous area ... or a small number of contiguous areas ... possibly depending on the size of the application) ... then the actual load operation might happen in a single physical i/o.

The normal cms file system might be able to do 64kbyte I/O reads if the records happened to be written out sequentially and contiguously. Note that in the page mapped case ... there was the underlying magic ... that it would optimize the fetch to however contiguous the records happened to be ... whether sequentially contiguous or not.
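
a minimal sketch, in C with hypothetical names, of the allocate-contiguous-if-possible convention (rollback on the out-of-space path omitted for brevity):

  enum alloc_rc { ALLOC_CONTIG, ALLOC_SCATTERED, ALLOC_NOSPACE };

  #define NBLOCKS 65536
  static unsigned char freemap[NBLOCKS];    /* 1 = block is free */

  /* first look for a contiguous run of n blocks; otherwise take any
     n free blocks but tell the caller via the return code */
  enum alloc_rc alloc_blocks(unsigned n, unsigned out[])
  {
      unsigned run = 0;
      for (unsigned i = 0; i < NBLOCKS; i++) {
          run = freemap[i] ? run + 1 : 0;
          if (run == n) {
              for (unsigned j = 0; j < n; j++) {
                  out[j] = i - n + 1 + j;
                  freemap[out[j]] = 0;
              }
              return ALLOC_CONTIG;
          }
      }
      unsigned got = 0;
      for (unsigned i = 0; i < NBLOCKS && got < n; i++)
          if (freemap[i]) { freemap[i] = 0; out[got++] = i; }
      return (got == n) ? ALLOC_SCATTERED : ALLOC_NOSPACE;
  }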

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Date: Sat, 11 Jun 2005 09:00:28 -0600
Newsgroups: bit.listserv.vmesa-l
oh, what the heck ... another page mapped filesystem addenda
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#55 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

the original code had started on cp67 with the cms (cp67) original filesystem ... which required a bit of finagling to fake out 800-byte oriented blocks in a 4k block environment. at the other end of the spectrum ... from large contiguous file loading ... were the small file operations.

i did a hack for single (4k) block files ... instead of the file directory entry pointing to a hyperblock that pointed to the actual file record ... the page mapped filesystem pointed the file directory entry directly at the actual file record (a special case for small files, eliminating the intermediate hyperblock record).
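
a minimal sketch of the shape of that special case, with hypothetical C names:

  /* file directory entry: for a one-block file the pointer goes
     straight to the data block; otherwise it goes to a hyperblock
     containing the pointers to the data blocks */
  struct fst {
      unsigned nblocks;    /* file size in 4k blocks */
      unsigned blockno;    /* nblocks == 1: the data block itself;
                              nblocks >  1: the hyperblock of pointers */
  };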

the introduction of the edf filesystem had a native 4k block option ... and also some more games with optimizing the filesystem usage (there were still hacks about page mapped stuff needing to be page aligned ... and when it wasn't, there was the overhead of moving stuff around).

overall the page mapped filesystem work wasn't to totally rewrite all of the cms filesystem infrastructure ... but to leverage as much of the existing filesystem structure as possible ... just doing the necessary morphing for page mapped operation (and numerous performance tweaks here and there).

Much later at sjr, I had done the file backup/archive system that was deployed in various internal locations ... including hone
https://www.garlic.com/~lynn/subtopic.html#hone

i started out with a lot of modifications to vmfplc2 for data to/from tape. base vmfplc2 wrote a 64byte record of the file directory entry ... followed by one or more 4k data records (of the actual file). on 6250bpi open reel tape ... this resulted in enormous (inter-record gap) waste for small files. so one hack was to merge the file directory entry into the first data record. For large files, physical blocking was one or more 4k data records in a single physical tape block (further reducing the waste of inter-record gaps).

It didn't make any difference for non-page-mapped filesystems ... but if it happened to be a page mapped filesystem ... the vmfplc2 hacks forced all the data record buffers to be page aligned. And the merged 64byte directory data was at the end of the first data block for a file.

So reading a tape ... it was something like an N*4k+64byte CCW read with SILI set on. The physical tape record read could be 1-N 4k records ... with or w/o appended 64byte directory data. The residual count from the I/O operation would indicate whether or not an exact multiple of 4k records had been read and/or whether or not appended directory data had been read. If appended directory data had been read ... it indicated the start of a new file.

appending the 64byte directory data to the end of the first block of data records read (instead of prepending it) ... meant that the data records were always aligned on 4k page boundaries and never offset by 64 bytes. The directory data could be found at the appended end of the data just read ... but the data blocks were on page aligned boundaries. Then the "write" ... if it happened to be a page mapped filesystem ... could be done as a direct asynchronous page-out operation.
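
a minimal sketch, in C with hypothetical names, of interpreting the residual count from such a read:

  #define MAXREC  16                       /* illustrative blocking max */
  #define READLEN (MAXREC*4096 + 64)       /* CCW count; SILI suppresses
                                              the incorrect-length check */

  /* classify a physical tape block from the channel residual count:
     some whole number of 4k data records, possibly with a 64-byte
     directory entry appended (which marks the start of a new file) */
  void classify_block(unsigned residual, unsigned *ndata, int *newfile)
  {
      unsigned got = READLEN - residual;   /* bytes actually read */
      *ndata   = got / 4096;               /* page-aligned data records */
      *newfile = (got % 4096) == 64;       /* appended directory entry? */
  }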

The routine was modified to have double buffering ... which had no effect if it was dealing with a normal cms filesystem ... but if it was dealing with an underlying page mapped filesystem ... the asynchronous operations would allow disk (page) transfers to go on asynchronously with tape I/O operations (w/o requiring cms to have explicit serialization/synchronization support ... it would happen under the covers with page faults).

It was transparent to the backup/archive application whether it was running with a page mapped filesystem or a vanilla filesystem ... but the tweaks were done in such a way that if a page mapped filesystem was involved ... the performance was significantly improved (page aligned asynchronous operation with double buffering overlap).

The backup/archive system then went thru a number of internal releases
https://www.garlic.com/~lynn/submain.html#backup

and was finally released to customers as workstation datasave ... which subsequently morphed into adsm and has since been renamed tsm.

Ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient history
Newsgroups: sci.crypt,comp.arch
Date: Sat, 11 Jun 2005 09:30:23 -0600
"Jon A. Solworth" writes:
The situation with C-related bugs is, I believe, analagous to that of covert channels. Covert channels were known in the public literature since 1973 when Butler Lampson wrote "A note on the confinement problem". It is believed to be known before that by certain government agencies. But the mapping out of the types of covert channels, the theoretical structure (non-interference), and determining systematic methods for their analysis and measurement took decades. See J. Millen in 1999's oakland statement "20 years of covert channel modelling and analysis" (A one page, largely non-technical, very humorous statement).

a post i thought i made (while on the road) may have gotten lost in some process ... i did my normal archive process
https://www.garlic.com/~lynn/2005j.html#49 Public disclosure of discovered vulnerabilities
and it followed
https://www.garlic.com/~lynn/2005j.html#48 Public disclosure of discovered vulnerabilities

but on subsequent checking ... it never showed up.

basically, human administrative manual operations are subject to mistakes. C buffers have implicit lengths that are open to human management and mistakes. Other environments ... including purely assembler-based infrastructures ... have had buffers with explicit associated lengths, and normal infrastructure operations (whether implemented in assembler or any other language) make regular use of the explicit length fields.

As a result ... there are drastically fewer buffer length exploits in these environments (compared to c-language environments) ... even when they involve exclusively assembler implementation. In theory, assembler offers a huge number of additional opportunities for programmers to shoot themselves in the foot (compared to c) ... but for environments with explicit buffer lengths as part of the normal infrastructure ... buffer related mistakes are drastically fewer ... even for pure assembler programming environments.

What you do see in pure assembler programming environments is a lot of register management vulnerabilities. assembler forces the programmer to manage register contents (in much the same way that c forces the programmer to manage buffer lengths). common failures in assembler involve registers having incorrect contents ... which can really be manifested when the contents are supposed to be pointers. frequently anomalous code paths are involved ... where specific code paths have not sufficiently prepared register values for later use (under manual programmer administrative control). for contrast ... in the c language environment ... incorrect register value related failures are significantly fewer ... because the c language programmer is rarely involved in manually managing register content values.

common to many environments where there is programmer administrative manual management of buffer allocation/deallocation is the problem of dangling pointers. this is similar to, but different from, the assembler problem of incorrect management of register values. In the dangling pointer case ... the vulnerability is programmer manual administrative management of pointers synchronized with buffer allocation/deallocation ... where pointers can incorrectly remain in use after the associated storage/buffer areas have been deallocated.

in the typical physical world threat/response model ... when various types of mistakes have become especially prevalent (as in the case of c language related buffer overflow exploits) ... effective compensating processes are typically created ... to eliminate the widespread occurrence of such problems.

lots of past posts specifically related to the extensive buffer overflow exploits in c language environments
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Date: Sat, 11 Jun 2005 11:32:01 -0600
Newsgroups: bit.listserv.vmesa-l
I guess since I started ... might as well do another.

one of the side projects was something called high speed data transport (HSDT) .... it was sort of an internal joke.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the project was subcontracting for some stuff from a couple companies in japan. one friday, somebody from raleigh distributed an announcement about a new discussion group/mailing list on high speed communication. the announcement included the following definitions:
low-speed        <9.6kbits
medium-speed     19.2kbits
high-speed       56kbits
very high-speed  1.5mbits

The following Monday morning on the wall of a conference room in Japan was:
low-speed        <20mbits
medium-speed     100mbits
high-speed       200-300mbits
very high-speed  >600mbits

now later on in hsdt ... one of the things i did was put rfc1044 support into the standard mainframe tcp/ip support:
https://www.garlic.com/~lynn/subnetwork.html#1044

the standard mainframe tcp/ip support was getting something like 44kbytes/sec aggregate thruput and using 100 percent of a 3090 processor doing it. some tuning of the rfc1044 support at cray research, between a cray and a 4341-clone, was getting 1mbyte/sec effective thruput using only a modest amount of the 4341-clone processor (about 23 times the bytes/sec using less than 1/20th the processing pathlength ... something like 500 times higher efficiency).

although we had done all this backbone work ... we weren't allowed to bid on the nsfnet backbone ... however, a subsequent technical audit by nsf claimed what we had running was at least five years ahead of all nsfnet backbone RFP responses to build something new ... minor references
https://www.garlic.com/~lynn/internet.htm
https://www.garlic.com/~lynn/subnetwork.html#internet

anyway ... earlier in the hsdt project ... one of the activities was trying to integrate the standard vnet networking support into the hsdt backbone operation. the internal network was primarily vm vnet backbone nodes and was larger than arpanet/internet from just about the start until around mid-85 ... minor references
https://www.garlic.com/~lynn/subnetwork.html#internalnet

part of the issue was that a vnet node used the vm spooling system for all its data storage ... including intermediate node store&forward. Now the standard vm spooling interface was thru emulated unit record i/o. In part for vnet ... a special diagnose was defined that did synchronous read/write 4k spool record operations. The problem was that since it was synchronous (the vnet virtual machine blocked while the transfer was in operation) to the standard spool system ... a diagnose I/O spool request would go on the queue ... just like one from any other virtual machine. The spool system might be doing maybe 50-60 4k requests/sec. However, because of the synchronous & queuing processing ... the vnet virtual machine might be getting only 5-6 4k requests/sec (lucky if it got 20k-30k bytes/sec thruput ... maybe 200-300kbits/sec aggregate).

An hsdt backbone could easily be doing several local 50mbit/sec interconnects and multiple full-duplex T1 longer haul links (1.5mbits simultaneously in each direction, 3mbits aggregate per T1 link). Reasonable aggregate sustained thruput could easily be a couple mbytes/sec or 20-30mbits/sec ... a good hundred times what the typical vm spooling interface was capable of delivering.

so i undertook to rewrite the spooling subsystem, written in vs/pascal and running in a disconnected service virtual machine ... with the objective of supporting the standard vm spooling operations ... but also capable of providing a vnet virtual machine node with several mbytes/sec of effective thruput capacity.

So the requirements were very similar to the optimization stuff done for the cms page mapped filesystem ... a structure that supported contiguous allocation and was capable of multiple block reads/writes for the same file (the standard file system had strictly link chaining ... you didn't know the location of the next spool file 4k block until after you had read the previous 4k spool file block in the chain). Furthermore, the interface needed to allow asynchronous, overlapped operation ... the vnet virtual machine needed to be able to concurrently service multiple spool file operations while at the same time interfacing to a large number of i/o interfaces going to external processors (both local channel-to-channel operations as well as much longer haul operations) ... w/o being held non-executable while a spool file operation was eventually getting serviced (possibly at the back of a queue of a large number of other spool file requests from other applications running on the same real system). The interface to vnet ... had a large number of similarities (and shared some of the same code) with the kernel support for the cms page mapped filesystem. However, the administrative management and allocation was all implemented in a service virtual machine.

Furthermore ... since vnet was doing store&forward of unprocessed 4k spool file blocks ... it had to accept incoming spool file blocks from unmodified vm systems (with standard vm spool systems) ... and the outgoing spool file blocks needed to be processable by unmodified vm system (again with standard vm spool systems).

the mainstay hsdt backbone systems had to have the enhancements ... since they needed the thruput. w/o the spool file enhancements a single vnet node was typically limited to an aggregate of 200-300kbits/sec. A single T1 link required aggregate thruput processing of 3mbits/sec (full-duplex T1, 1.5mbits in each direction) ... at least ten times larger than what was available. Even a vnet node with multiple full-duplex 56kbit links (each 112kbits/sec aggregate) could easily saturate the thruput capacity of the standard spool file interface.

So a starting point was an asynchronous spool file interface that could read/write up to at least eight 4k contiguous blocks in one operation.
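
a hypothetical C sketch of the shape of such an interface (all names invented for illustration ... the real one was a cp interface used from the vs/pascal service virtual machine):

  #define SPOOL_MAXBLKS 8

  struct spool_req {
      unsigned fileid;        /* spool file being read/written */
      unsigned startblk;      /* first 4k block of a contiguous run */
      unsigned nblks;         /* 1..SPOOL_MAXBLKS */
      void    *buf;           /* nblks * 4k, page aligned */
      void   (*done)(struct spool_req *r);  /* completion callback */
  };

  /* both calls start the transfer and return immediately, so the
     vnet machine keeps servicing links while spool i/o is queued */
  int spool_read_async(struct spool_req *r);
  int spool_write_async(struct spool_req *r);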

Part of this is that at the next lower level in the CP kernel ... the spool file record processing and the paging record processing were merged into the same shared code path. This has been one of the reasons that (standard 4k) "paging" has been able to "overflow" into the spooling area ... in effect both spooling and paging are implemented using common 4k record read/write processes.

misc. past posts on spool file system and/or spool file system rewrite:
https://www.garlic.com/~lynn/94.html#30 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001h.html#8 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#13 VM: checking some myths.
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001l.html#25 mainframe question
https://www.garlic.com/~lynn/2002k.html#25 miscompares per read error
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#3 History of C

Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Sat, 11 Jun 2005 12:29:31 -0600
Subject: Re: Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
Newsgroups: bit.listserv.vmesa-l
Anne & Lynn Wheeler wrote:
the project was subcontracting for some stuff from a couple companies in japan. one friday, somebody from raleigh distributed an announcement about a new discussion group/mailing list on high speed communication. the announcement included the following definitions:

low-speed        <9.6kbits
medium-speed     19.2kbits
high-speed       56kbits
very high-speed  1.5mbits


re:
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

small footnote on high speed communication ... the 37xx boxes only supported up thru 56kbits. raleigh did a study that claimed that customer uptake of T1 links was at least 5-10 years away. The conclusion was possibly because the only product they had supporting T1 links was the extremely aged 2701 ... which was pushing 20 years old. There was a custom Zirpel T1 card for FSD selling into gov. RFPs ... but that fit into a series/1 ... not a raleigh product. The other possible reason they came up with the conclusion was the methodology.

37xx had support for fat pipes: a customer could put in multiple parallel 56kbit links and the 37xx would attempt to manage them as a single higher speed link by interleaving traffic in parallel. The methodology for the conclusion surveyed customer installed fat pipes ... aka the number of fat pipes with two parallel 56kbit links, the number with three parallel 56kbit links, etc. They didn't find any with more than five parallel 56kbit links and therefore concluded that customer demand for more than 5*56kbit capacity was quite some time away.

The problem with the raleigh methodology was that this was in the period where telcos were starting to tariff a single T1 at about the same price as 5-6 individual 56kbit links. After 2-3 56kbit fat-pipe links ... customers were finding it easier and more cost effective to install a full T1 link and have it supported with a non-raleigh product (typically from some other vendor). Somebody claimed that at the same time raleigh was doing its flawed analysis (that included claiming mainframe T1 links didn't exist), they did a trivial survey that took less than a week ... and found over 200 customer installed mainframe T1 links (all being supported by non-ibm interconnect products) ... and this didn't even include any of the internal HSDT high-speed links
https://www.garlic.com/~lynn/subnetwork.html#hsdt

it was some of these activities ... that contributed to one of the senior disk division people doing a presentation at a world-wide, annual internal raleigh conference ... where they started out claiming that the head of the communication group was going to be personally responsible for the demise of the disk division ... related past posts:
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
https://www.garlic.com/~lynn/2005j.html#33 IBM Plugs Big Iron to the College Crowd

and past posts telling the tale about the high-speed communication discussion group announcement:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history

Ancient history

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient history
Newsgroups: sci.crypt,comp.arch
Date: Sat, 11 Jun 2005 13:16:57 -0600
"Jon A. Solworth" writes:
This is pure nonsense. Systems are layered to reduce complexity. Everyone uses layered systems today, there are *no* exceptions and on a practical matter no alternative.

It is part of managing complexity. When you look at your program you do not, I presume, compute the underlying physics equations and look at every possible error which can occur. Its not remotely feasible to do so.

Layering is used to create abstraction and abstraction is a simplified--that is, inaccurate--view of what is happening.

The selection of abstractions, or interfaces, is a central task.

With the right abstractions, you concentrate on the important issues, at the expense of unimportant issues.


i've repeatedly claimed in the c-language buffer overflow case ... it isn't just the abstraction ... it is the abstraction implementation. i'm familiar with some number of pure assembler environments ... which have little or no buffer overflows. my assertion is that when a human programmer has to manage some information ... that area is prone to mistakes. in the c language environments ... one such area is storage area lengths ... with the result being mistakes related to storage area lengths. some number of pure assembler environments that have an infrastructure convention of explicit lengths associated with storage areas have had little or none of the typical buffer length related failure modes common to a large number of c language environments (a minimal sketch of the convention follows).
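
purely as illustration ... a minimal sketch in c of what an "explicit length carried with the storage area" convention can look like; the names (counted_buf, cb_copy) are invented for this sketch and aren't from any of the environments mentioned:

    #include <stddef.h>
    #include <string.h>

    /* the length travels with the storage area itself */
    typedef struct {
        size_t cap;      /* allocated length of the area */
        size_t used;     /* bytes currently in use */
        char  *data;     /* the storage area */
    } counted_buf;

    /* copy src into dst ... the length check lives in the common
       infrastructure routine, not in per-call programmer bookkeeping */
    int cb_copy(counted_buf *dst, const counted_buf *src)
    {
        if (src->used > dst->cap)
            return -1;                   /* refuse rather than overflow */
        memcpy(dst->data, src->data, src->used);
        dst->used = src->used;
        return 0;
    }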

on the other hand ... most of these assembler language environments will have failures related to the programmer's administrative management of register contents (the programmer makes a mistake about the possible current value in a register). By comparison, c language environments tend to have relatively few failures related to mismanagement of register content values.

likewise ... there are some number of assembler language environments ... where the infrastructure convention requires the programmer to manage the storage area lengths (similar to what they have to do in common c-language environments) ... and there can be similar rates of storage area length related problems (to what is frequently found in c language environments).

note that layers can increase complexity if they have poor abstractions (like forcing programmers to manage storage area lengths) and/or there is a mismatch between the abstractions supported by the different layers (which tends to require manual programmer effort to compensate for any such mismatches).

for some topic drift ... several years ago we were talking to a large financial transaction infrastructure ... and they attributed their one hundred percent availability over the previous six years (compared to previously) mainly to two factors:
• ims hot-standby
• automated operator


basically ... people make mistakes ... and the fewer operations that require a real live human to make some decision ... the fewer chances you have for a human to make a mistake. automated operator eliminated a lot of the opportunity for human mistakes (compared to the pre-automated-operator period). I claimed to have done quite a bit of early automated system operator work as part of automated benchmarking
https://www.garlic.com/~lynn/submain.html#bench

when my wife was con'ed into going to pok to be in charge of loosely-coupled architecture ... she had come up with Peer-Coupled Shared Data
https://www.garlic.com/~lynn/submain.html#shareddata

and about the only organization that worked hard with her on it (at the time) was the ims hot-standby group. the primary focus at the time was bigger and bigger mainframe iron (processors) and/or larger numbers of SMP processors in the same CEC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Ancient history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ancient history
Newsgroups: sci.crypt,comp.arch
Date: Sat, 11 Jun 2005 15:15:41 -0600
glen herrmannsfeldt writes:
In either case there is some overhead to actually doing it. That seems to be the biggest reason not to do it.

for generalized bounds checking there is ... but as has been demonstrated in extremely performance sensitive, purely assembler based infrastructures ... it is possible to have a pointer+length abstraction and use it for the majority of buffer/storage related operations with little observable overhead ... aka all operations that otherwise use some sort of length-based operation and all library and infrastructure operations that operate on collections of characters or bytes (as opposed to lower-level per byte/character checks).

having the explicit length for use by all existing length oriented operations ... and, in addition, utilizing the length abstraction for all operations that deal with collections of bytes/characters ... goes a long way toward eliminating the majority of length related failures ... with little or no measurable increase in overhead.

frequently the biggest reason for not doing it ... is that it is a change; and it seems that worst case performance issues are frequently raised in support of not needing the change.

frequently infrastructures will have tens of thousands of instructions of pathlength for dealing with a specific byte/character collection ... and adding a couple more instructions of length checking to the couple dozen operations that might involve things like copying or moving a byte/character collection from one storage area to another .... will be totally lost in the noise (see the sketch below).
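
as a purely hypothetical sketch of the scale involved (the function name and signature are invented, not taken from any actual infrastructure) ... the length check added to a copy/move primitive amounts to a compare and a branch:

    #include <stddef.h>
    #include <string.h>

    /* move src into dst ... the length check is the only addition
       over an unchecked move: a couple of instructions against the
       multi-thousand instruction pathlength spent handling the data */
    int bounded_move(char *dst, size_t dst_len,
                     const char *src, size_t src_len)
    {
        if (src_len > dst_len)           /* the "couple more instructions" */
            return -1;                   /* refuse rather than overflow */
        memmove(dst, src, src_len);
        return 0;
    }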

such a change might not be 100 percent foolproof against all possible length related programming mistakes ... but it might possibly handle 99.9999 percent of the existing buffer length associated exploits.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage collection

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage collection
Newsgroups: sci.crypt,comp.arch
Date: Sat, 11 Jun 2005 15:37:15 -0600
Bryan Olson writes:
Modern systems are way ahead of that. Garbage collection got much faster in the mid 1980's with the generational optimization, and the overhead is now typically 5%-25%. (Numbers from a couple good surveys of GC available on the net, one by Paul R. Wilson and another by Min Zhong.)

when cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

did the port of apl\360 to cms\apl in the early 70s ... garbage collection was a big problem. apl\360 had small real memory workspaces that were swapped in total (16kbytes, maybe 32kbytes in some cases).

port of the apl\360 interpreter to cms\apl involved going to large (possibly multi-megabyte) virtual memory workspaces. the same apl application that ran efficiently in a 32kbyte workspace could cause horrendous paging problems in a virtual memory environment.

at the time, the science center was developing a whole variety of performance monitoring, management, etc. tools ... some of which evolved into things like capacity planning
https://www.garlic.com/~lynn/submain.html#bench

one of the tools that was evolving was something that eventually got released as a product called vs/repack. the tool would capture "I" and "D" (instruction and data) storage references and analyze what was going on ... including producing various kinds of plots. The final product also involved semi-automatic program reordering for improved operation in a virtual memory environment (trying to compact weak and seemingly random virtual page reference patterns).

one of the things that it highlighted was that the apl\360 storage handling would always allocate new storage on any assignment ... and discard the old. when it reached the end of the workspace ... it would compact all allocated areas and reclaim all discarded space (aka garbage collection). for a virtual memory environment ... it was obvious that you didn't want storage allocation willy-nilly stepping thru the total available address space between garbage collections (a sketch of the behavior follows). So for cms\apl the storage allocation and garbage collection were rewritten to drastically improve performance in a virtual address space environment. this was released to customers as cms\apl on cp67/cms (predating the availability of vm370 and the appearance of virtual memory on 370s in the early 70s).
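
to make the allocation behavior concrete ... a minimal, purely illustrative sketch in c (hypothetical code, not the apl\360 implementation) of allocate-on-every-assignment with compaction only at end of workspace:

    #include <stddef.h>

    #define WORKSPACE (32*1024)          /* apl\360-era workspace size */
    static char workspace[WORKSPACE];
    static size_t next_free = 0;

    /* every apl assignment takes fresh storage ... the old value is
       simply abandoned until the next garbage collection */
    void *assign(size_t n)
    {
        if (next_free + n > WORKSPACE) {
            /* end of workspace: compact all live values to the front
               and reclaim discarded space (the garbage collection) ...
               cheap in a 32kbyte real-memory workspace, horrendous
               paging when the workspace is multi-megabyte virtual
               memory (compaction elided in this sketch) */
            return NULL;
        }
        void *p = &workspace[next_free];
        next_free += n;                  /* sweeps thru the whole workspace */
        return p;
    }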

other aspects of the tool that was eventually released as vs/repack got quite a bit of use in a number of compiler and application product groups ... as various mainframe operating systems started making the transition from real-storage orientation to virtual memory orientation after virtual memory for 370s was announced. in the early 70s ... some number of product groups, in addition to using it to analyze storage reference patterns ... also used it for instruction "hot spot" identification (which instructions were involved in the majority of the application's execution) ... i.e. the candidates for investigating performance optimization rewriting.

one of the major cms\apl (and follow-on apl\cms) service offerings was the internal HONE system that supported world-wide field, sales, and marketing people ... where the majority of the applications were apl based
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past postings mentioning vs/repack:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Sat, 11 Jun 2005 16:16:36 -0600
"Stephen Sprunk" writes:
S&Ls were long gone by the time I hit the banking world, so I have to ask: did S&Ls allow loans to non-members? I know CUs don't. CUs also have very limited sets of potential members, making it difficult to gather enough money worth stealing; I don't know if the same supposedly applied to S&Ls. My CU, for instance, doesn't allow business members (and therefore no business loans), though that may have happened after the S&L fiasco.

possibly a little long-winded ... but the posting on the thread between information security and risk management ... goes into some amount of detail about the S&L event
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

one of the issues in the S&L event was the reduction of the capital reserve requirement from 8 percent to 4 percent ... with a whole slew of S&Ls now having a huge wad of cash that they needed to place somewhere ... and the assistance they got from various quarters as to where they might be able to park such a large influx of available cash.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More on garbage

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More on garbage
Newsgroups: sci.crypt,comp.arch
Date: Sat, 11 Jun 2005 16:09:11 -0600
"Jon A. Solworth" writes:
On the contrary, there is a difference between security and reliability. Security has an intelligent adversary where as reliability deals with independent random events.

warning topic drift .... one common(?) security abstraction is PAIN:
P ... privacy
A ... authentication
I ... integrity
N ... non-repudiation


it is sometimes CAIN ... confidentiality in place of privacy ... and sometimes the A is "availability" (although availability can be subsumed under integrity). Also some people might refer to the "I" as identification (but that gets into the subject of authentication and identification frequently being confused).

In any case ... reliability can be considered both an aspect of overall integrity and an aspect of availability ... both fundamental to the general security environment ... modulo frequent efforts to treat security purely within the context of some sort of (human) attack or exploit.

however, when we were starting the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp

one of the things we did was some detailed vulnerability analysis ... basically in support of RAS issues (reliability, availability, serviceability).

one of the things we predicted was a dramatic increase in the frequency of buffer length related vulnerabilities compared to other environments we were familiar with that used explicit lengths in their abstractions (possible disclaimer at this point ... taking sides on buffer length related exploits might be construed as supporting the predictions made nearly 20 years ago). at the RAS level ... it didn't make any difference whether failures were purely random and accidental or attacker induced.

in any case ... if the 911 service didn't work ... people might question its integrity ... regardless of the causes of any failures ... whether purely naturally occurring random events or direct human-based attacks.

lots of past buffer length related posts
https://www.garlic.com/~lynn/subintegrity.html#overflow

misc. past postings mentioning PAIN:
https://www.garlic.com/~lynn/aadsm10.htm#paiin PAIIN security glossary & taxonomy
https://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
https://www.garlic.com/~lynn/aadsm11.htm#11 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#12 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm16.htm#11 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm16.htm#13 The PAIN mnemonic
https://www.garlic.com/~lynn/aadsm16.htm#14 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#17 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#18 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#23 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#3 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#5 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#28 Definitions of "Security"?
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/2003f.html#37 unix
https://www.garlic.com/~lynn/2003j.html#47 The Tao Of Backup: End of postings
https://www.garlic.com/~lynn/2003o.html#22 securID weakness
https://www.garlic.com/~lynn/2003o.html#29 Biometric cards will not stop identity fraud
https://www.garlic.com/~lynn/2003p.html#11 Order of Encryption and Authentication
https://www.garlic.com/~lynn/2004b.html#44 Foiling Replay Attacks
https://www.garlic.com/~lynn/2004h.html#13 Two-factor Authentication Options?
https://www.garlic.com/~lynn/2005e.html#42 xml-security vs. native security
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

