List of Archived Posts

2005 Newsgroup Postings (07/27 - 08/13)

Cluster computing drawbacks
Cluster computing drawbacks
Cluster computing drawbacks
Data communications over telegraph circuits
54 Processors?
Wildcard SSL Certificates
Cache coherency protocols: Write-update versus write-invalidate
54 Processors?
big endian vs. little endian, why?
Which certification authority to use
Code density and performance?
Code density and performance?
Code density and performance?
Why? (Was: US Military Dead during Iraq War
Why? (Was: US Military Dead during Iraq War
1.8b2 / 1.7.11 tab performance
Code density and performance?
Communications Computers - Data communications over telegraph
Code density and performance?
Code density and performance?
Why? (Was: US Military Dead during Iraq War
Code density and performance?
Code density and performance?
Code density and performance?
IBM's mini computers--lack thereof
Data communications over telegraph circuits
Data communications over telegraph circuits
Data communications over telegraph circuits
Data communications over telegraph circuits
Data communications over telegraph circuits
Data communications over telegraph circuits
Code density and performance?
Why? (Was: US Military Dead during Iraq War
X509 digital certificate for offline solution
Data communications over telegraph circuits
PART 3. Why it seems difficult to make an OOO VAX competitive
Code density and performance?
What was new&important in computer architecture 10 years ago ?
What was new&important in computer architecture 10 years ago ?
Uploading to Asimov
You might be a mainframer if... :-) V3.8
Moz 1.8 performance dramatically improved
Moz 1.8 performance dramatically improved
X509 digital certificate for offline solution
What was new&important in computer architecture 10 years ago ?
Anyone know whether VM/370 EDGAR is still available anywhere?
seamonkey default browser on fedora/kde?
Anyone know whether VM/370 EDGAR is still available anywhere?
Good System Architecture Sites?
X509 digital certificate for offline solution
APL, J or K?
IPSEC and user vs machine authentication
ARP routing

Cluster computing drawbacks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Wed, 27 Jul 2005 00:37:52 -0600
"Stephen Fuld" writes:
OK, that sounds similar to the old IBM loosely coupled scheme where multiple computers each had a channel to a disk controller. That is, there is a direct path from each CPU to a disk without going through another CPU. Contrast that with say a Beowulf cluster, or for that matter, any cluster of commodity PCs using an interconnect fabric of some sort. That is the distinction I was thinking about.

we had come up with geographic survivability when we were doing (non-mainframe) ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

however the mainframe culmination of my wife's Peer-Coupled Shared Data architecture (when she did her stint in POK in charge of mainframe loosely-coupled architecture) is current mainframe parallel sysplex
http://www-1.ibm.com/servers/eserver/zseries/pso/

and this is geographic dispersed parallel sysplex:
http://www-1.ibm.com/servers/eserver/zseries/gdps/

it mentions continuous availability ... when we were doing ha/cmp, we got asked to author part of the corporate continuous availability strategy document ... however, both pok and rochester complained ... that our geographic survivability statements couldn't be met by them (at the time).
https://www.garlic.com/~lynn/submain.html#available

note that FCS would provide both interprocessor and device connectivity using the same fabric. some of the upcoming disk assemblies will be able to run disk data transfers over ethernet (again both interprocessor and device connectivity using the same fabric).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cluster computing drawbacks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Tue, 26 Jul 2005 16:36:10 -0600
"Stephen Fuld" writes:
Is loosely coupled essentially a cluster? I thought that there was a distinction in that loosely coupled meant shared DASD (disk to non-IBMers) whereas a cluster (in today's parlance) typically meant totally independent systems but with some I/O type interconnect. But not direct access to a common disk pool without going through another CPU. But I may be wrong in my terminology.

in the 60s they were mostly in the same data center with connectivity to a common i/o pool ... especially in availability configurations (and because dasd/disk price/bit was quite expensive) ... later, availability configurations over geographic distances became replicated/mirrored data.

in any case, one of the driving factors for a common i/o pool was the significant dasd/disk cost. at various points the disk business unit pulled in more revenue than the processor/memory business unit.

some of the 60s scenarios may have made common i/o pool for availability easier ... since 360 I/O channels had 200ft runs ... you could place processor clusters in the center and then have 200ft radius connectivity. this was increased with data streaming in the 70s to 400ft runs (allowing 400ft radius) .... although some larger installations found even this a limitation ... so there were some datacenters that spread in 3d over multiple floors/stories.

possibly the first SAN was at ncar. the disk/dasd pool was managed by an ibm mainframe ... but hyperchannel A515 adapters also provided ibm channel emulation access to other processors (having a connection in the hyperchannel environment). various processors in the complex (crays, other processors) would communicate with the ibm mainframe (control channel). the ibm mainframe would set up i/o transfer commands in the A515 ... and return a handle for the (a515) i/o commands to the requesting processor. The requesting client (cray supercomputer) would then invoke the A515 i/o commands for direct disk/dasd data transfer (using the same i/o interconnect layer for both the separate control path with the ibm mainframe and the direct disk data transfer).

One of the reasons for the third-party transfer specification in the HiPPI switch standard ... was to be able to emulate the ncar hyperchannel environment.
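
purely as an illustration of the flow (my own rough sketch in python; the class and dataset names are made up and nothing here is actual ncar or hyperchannel code) ... the client uses the control path to the ibm mainframe only to get the i/o set up and to obtain a handle, then drives the data transfer directly against the disk pool over the same fabric, keeping the mainframe out of the data path:

# loose sketch (invented names) of the ncar-style third-party transfer described
# above: control path to the storage manager, direct data path for the transfer.

class StorageManager:                      # plays the role of the ibm mainframe
    def __init__(self):
        self.pending = {}
        self.next_handle = 1

    def setup_transfer(self, dataset, length):
        """control channel: build the i/o program and hand back a handle."""
        handle = self.next_handle
        self.next_handle += 1
        self.pending[handle] = {"dataset": dataset, "length": length}
        return handle

class Adapter:                             # plays the role of the A515 adapter
    def __init__(self, manager):
        self.manager = manager

    def execute(self, handle):
        """data channel: the client drives the prepared i/o directly."""
        io = self.manager.pending.pop(handle)
        return b"\0" * io["length"]        # stand-in for the actual disk transfer

manager = StorageManager()
adapter = Adapter(manager)

handle = manager.setup_transfer("CRAY.RESULTS.DATA", length=4096)  # control path
data = adapter.execute(handle)                                     # direct data path
print(len(data), "bytes moved without passing through the storage manager")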

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cluster computing drawbacks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cluster computing drawbacks
Newsgroups: comp.arch
Date: Wed, 27 Jul 2005 01:04:36 -0600
glen herrmannsfeldt writes:
IBM at least used to be interested in scientific computing. They did build the 360/91 and the vector instructions for S/370.

But yes, the commercial side needs the high uptime.


well they have done those large clusters of things ... at least dating back to the announcement referenced in this posting
https://www.garlic.com/~lynn/95.html#13

here is reference to more recent activity
http://www.cbronline.com/article_news.asp?guid=948EF0BD-E854-4A6E-BC61-233280B893B3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Wed, 27 Jul 2005 09:54:23 -0600
hancock4 writes:
In that time frame (1950s) neither ASCII nor EBCDIC existed. IBM cards used Hollerith format. Also in that time frame much of the work was done by electro-mechanical machines, not electronics, so conversion wasn't that simple. A Baudot tape has control characters that determine the meaning of subsequent characters ('FIGS' and 'LTRS') and the converter needs to handle that; it's more than a one-to-one translation.

I believe the IBM transceivers could skip the paper tape step and send/receive cards directly over the line, with error checking.


some tab card drift ...

common practice in lots of shops (at least in 60s & 70s) was to punch sequence numbers in cols. 73-80 ... dropped & shuffled card decks could be put back in sequence by running the cards thru the sorter.
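
as an aside, what the sorter was doing is just a radix sort on the sequence field ... a minimal sketch (python, with made-up card images ... obviously not anything that ran on the hardware) of restoring a shuffled deck from the sequence numbers in cols. 73-80, one column per pass the way the physical sorter worked:

# put a dropped/shuffled deck back in order using the sequence numbers punched
# in columns 73-80; a stable sort per column, least significant column first,
# mimics running the deck through the card sorter once per column.

def restore_deck(cards):
    """cards: 80-column card images with a numeric sequence field in cols 73-80."""
    deck = list(cards)
    for col in range(79, 71, -1):              # 0-based columns 80 down to 73
        deck.sort(key=lambda card: card[col])  # stable sort == binning on one column
    return deck

def card(text, seq):
    """build an 80-column card image: source text plus 8-digit sequence number."""
    return text.ljust(72)[:72] + f"{seq:08d}"

shuffled = [card("C     COMPUTE AVERAGE", 30),
            card("      PROGRAM MAIN", 10),
            card("      END", 40),
            card("      INTEGER I", 20)]

for c in restore_deck(shuffled):
    print(c)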

when i was an undergraduate, the student keypunch room also had a ???? sorter, a 407 reader/printer, and another box (5??, may have been collator or a 519?). i have vague recollection of somebody claiming boxes could be connected together to perform some useful function ... the following shows a 403 and a 514 with some connector running between them
http://www.columbia.edu/cu/computinghistory/reproducer.html

at one time, student registration and some other applications used tab cards that were filled out with no. 2 pencil marks ... so it may have been a 519 with the optional pencil-mark reader. however, there was some other set of tab equipment over in the admin bldg ... which included a card printer ... i.e. i could get a card deck punched on the 2540 and send it over to the admin bldg. to have the holes in each card interpreted and the characters printed across the top of the card.

the 407 was plug board programmable ... there was standard plugboard for 80x80 card deck listings ... you put the card deck into the 407 card hopper and it read and printed each card.

i remember playing around programming one of the extra 407 plug boards ... trying to perform other useful functions.

one of my first (paid) programming jobs was a 360 application that read punch cards from student registration (possibly processed on a 519? i don't remember) on a 2540 reader/punch into the middle output hopper. it did some analysis and if there was any problem ... it punched a card into the middle hopper behind the just-read card. the registration cards were all plain manila stock ... the cards in the input punch hopper had a colored stripe across the top.

manual post-processing of the registration cards (held in trays containing some 3000 cards) could easily identify problem registration cards by the colored "marker" cards that could be seen when looking down at the top of the cards (in the drawer).

this describes another application using the 519 "mark sense" option:
http://home.comcast.net/~suptjud/IBMMachines.htm

my first actual paid programming job was doing a port of a 1401 MPIO application to the 360/30. The 1401 MPIO program was a unit record<->tape front end for the 709. batches of card decks (things like student fortran jobs) were read on the 1401 and written to tape; the tape was then carried over to the 709 for processing. The 709 output was written to tape, which was carried to the 1401, and the resulting cards were punched and/or printer output produced.

i got to design and implement my own monitor: interrupt handlers, dispatcher, device drivers, storage allocation, etc.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

54 Processors?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 54 Processors?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 27 Jul 2005 10:40:25 -0600
doug@ibm-main.lst (Doug Fuerst) writes:
I think that the advent of MPP (massively parallel processing) and the ability of certain processing complexes to get into the tera-flop range using 256 or 512 parallel processors indicates that it must be possible. I suspect it may be the way that the OS is set up.

most of the massive parallel processing machines have high-speed interconnect for communication ... but don't implement consistent shared memory ... which became a lot harder with the advent of caches i.e. each processor having local cached image/values from real memory ... and now memory consistency involves coordinating all the caches.

one of the justifications for caches ... was that it reduced the signal latency between the processor and real storage for accessing values (by possibly at least an order of magnitude). trying to coordinate a large number of caches ... re-introduces all sorts of signaling latencies.

sci (and some number of other architectures from the late '80s and early 90s) went to somewhat more relaxed memory consistency model and also better structured the processing points when cross-cache coordination needed to occur ... involving the related signalling latencies. related post on sci and memory consistency
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?

note in the above reference ... sci defined a 64-port smp memory consistency model; convex built 128-processor smp by using two-processor boards that shared common L2-cache and each common L2-cache interfaced to an SCI memory port; sequent built a 256-processor smp by using four-processor boards that shared common L2-cache and each common L2-cache interfaced to SCI memory port (sequent was subsequently bought by ibm).

however, the operating system's multiprocessing consistency and coordination model can also have a big effect on SMP processing overhead. For example, there were some changes made to vm/sp (one of the later vm370 versions) for the 3081 and TPF environment that significantly increased the overall SMP-related overhead. Typical SMP shops saw possibly ten percent of total processing being lost to this increased SMP overhead.

The changes were targeted at improving the thruput of a single TPF (aka transaction processing facility, the renamed airline control program) virtual machine guest running under vm370 on a 3081 (the tpf operating system at the time had no form of smp support); however, the changes shipped as part of the base product ... so all customers running smp were subject to the overhead. Basically, a lot of new cross-processor signaling was introduced into the vm370 kernel (and the associated signaling interrupt processing on the other processor).

in normal virtual machine operation ... the guest operating system executes some privileged instruction, interrupting into the vm370 kernel. vm370 does the complete emulation of the instruction and returns to virtual machine execution. the scenario was that tpf does a lot of SIOF ... the changes had the vm370 kernel on the processor where tpf was running do only the initial processing of the SIOF ... with the remaining processing (ccw translation, actual i/o, etc) passed to the vm370 kernel running on the other processor ... while the first processor resumed tpf execution.
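
purely to show the shape of the change (a hypothetical sketch in python, with threads and a queue standing in for the two processors and the cross-processor signal ... not vm370 code): processor A does only the initial SIOF handling, hands the rest off, and resumes the guest without waiting.

import queue
import threading

offload_work = queue.Queue()          # stands in for the cross-processor signal

def processor_b():
    """the 'other' processor: picks up the deferred SIOF processing."""
    while True:
        request = offload_work.get()
        if request is None:
            break
        # ccw translation, starting the actual i/o, etc. would happen here
        print(f"cpu B: completing SIOF for device {request['device']:03x}")
        offload_work.task_done()

def processor_a_siof(device):
    """processor running the tpf guest: minimal handling, then hand off."""
    print(f"cpu A: intercept SIOF for device {device:03x}, queue the remainder")
    offload_work.put({"device": device})   # the extra cross-processor chatter
    # the guest resumes here without waiting for the i/o setup to finish

b = threading.Thread(target=processor_b, daemon=True)
b.start()
for dev in (0x190, 0x191, 0x192):
    processor_a_siof(dev)
offload_work.join()
offload_work.put(None)                     # shut down the helper thread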

the significant increase in cross-processor kernel chatter had been generalized ... so that it not only affected the targeted TPF execution environment ... but all customers running vm370 smp support. in the dedicated tpf environment the 2nd processor was typically idle ... so while the change increased the overall kernel smp overhead ... it was more than offset by being able to offload some of the tpf-related processing to the idle processor.

collected smp postings:
https://www.garlic.com/~lynn/subtopic.html#smp

misc. past postings specifically mentioning 3081 and tpf
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005h.html#22 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#17 Performance and Capacity Planning

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Wildcard SSL Certificates

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wildcard SSL Certificates
Newsgroups: comp.security.misc
Date: Wed, 27 Jul 2005 13:07:32 -0600
"Michael" writes:
I have a question. I'm thinking about purchasing a wildcard SSL certificate for my domain. I've been told that the wildcard SSL certificate can be a security risk, but I've never heard an explanation as to why this is so. I've done searches on Google and I have yet to hear a real reason as to why these might not be safe.

Are these truly not safe? What situations should I be aware of before I decide to implement this?


basically, any processor that can obtain the corresponding private key can represent itself as some/any host in the specified domain ... clients will then consider it a valid host for the associated domain.

some sites have used wildcard certificates where they have multiple different hosts providing similar service (for scalability and/or availability) ... where each host may have a unique host name (within the domain).

indirectly some might consider it a security issue because it implies replication of the private key.

in the early days of processing load-balancing there was

1) use of multiple dns a-records ... where the same hostname was mapped to a list of ip-addresses. the browser could run thru the list of different ip-addresses until it found a host that responded (a rough sketch of this follows after this list). this would require each host to have its own copy of the corresponding private key ... but wildcard certificates wouldn't be required since all the different hosts (with different ip-addresses) would all be responding to the same hostname.

2) use of front-end redirection in the server boundary router (interfacing to a pool of servers). the client would map a single hostname to a specific ip-address ... the initial connect packet would pass thru the boundary router ... which had some special code to redirect the connect request to one of a pool of servers. again, a wildcard certificate wouldn't be needed ... but each server in the pool would require its own copy of the private key.
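
a rough sketch (python) of the multiple a-record approach in (1) ... one hostname resolves to several addresses and the client simply tries each one until something answers; the hostname in the comment is made up for illustration:

import socket

def connect_any(hostname, port=443, timeout=5):
    """try every address returned for hostname until a connection succeeds."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)           # first responding host wins
            return sock
        except OSError as err:
            last_error = err                 # dead host ... fall through to the next
    raise OSError(f"no address for {hostname} responded") from last_error

# example (hypothetical hostname):
# sock = connect_any("www.example.com")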

some past posts on ssl domain name server certificate
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the basic process has the browser checking that the server (that the browser is talking to) is able to demonstrate possession of the private key that goes with the public key in the certificate ... and that the hostname specified to the browser (in the url) corresponds to the hostname in the ssl certificate (supplied by the server). for wildcard certificates ... the browser just checks for a match on the non-wildcard portion against the corresponding portion of the supplied URL.
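
a bare-bones sketch (python) of that comparison as described above ... note real browsers layer additional restrictions on top of this (e.g. the wildcard normally only covers a single label); this only shows the basic non-wildcard-portion match:

def hostname_matches(cert_name, url_hostname):
    """match the certificate hostname against the hostname from the url."""
    cert_name, url_hostname = cert_name.lower(), url_hostname.lower()
    if not cert_name.startswith("*"):
        return cert_name == url_hostname         # ordinary certificate: exact match
    # wildcard certificate: only the non-wildcard portion has to match
    return url_hostname.endswith(cert_name[1:])

print(hostname_matches("www.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "shop.example.com"))   # True
print(hostname_matches("*.com", "anything.example.com"))       # True ... the ".com" risk below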

now, an attacker that could convince a certification authority to issue a wildcard certificate against a base tld qualifier, like ".com" ... could then impersonate anybody in .com. This isn't a vulnerability of a wildcard certificate having been correctly issued to you ... this is a vulnerability of a certification authority incorrectly issuing a wildcard certificate to an attacker.

say somebody else in your company is issued a wildcard certificate for their server ... and it happens to have very low security requirements and poor intrusion countermeasures ... and could be relatively easily compromised. it could then be made to impersonate other servers in the same company. Some of the other corporate servers might have much higher security requirements and therefore much stronger intrusion countermeasures (making them much more difficult to directly compromise) ... this is analogous to, but different from, some of the escalation-of-privilege attacks.

the attack isn't directly against your wildcard certificate ... it uses a wildcard certificate from a less well-defended server ... to turn around and impersonate servers that are much better defended.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Cache coherency protocols: Write-update versus write-invalidate

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cache coherency protocols: Write-update versus write-invalidate
Newsgroups: comp.arch
Date: Thu, 28 Jul 2005 10:17:17 -0600
Del Cecchi writes:
Have you looked at DASH and SCI? Those are the two oldest that I am aware of.

sci defined 64 "ports" for its shared memory bus implementation.

convex used two-processor (hp risc) boards with sci for the 128-processor exemplar.

both dg and sequent used four-processor (intel) boards with sci for 256-processor systems (sequent was subsequently bought by ibm)

dash was a research program at stanford ... the dash project web pages at stanford have gone 404 (but search engines turn up a lot of dash references)

and replaced by the flash follow-on
http://www-flash.stanford.edu/

sci was standards effort pushed by slac
http://www.scizzl.com/

there was some concurrent competition between sci and fcs at the time ... both were using pairs of uni-directional serial fiber-optic links (simulating full-duplex) ... and both were doing definitions for taking synchronous scsi bus commands and encapsulating them as asynchronous messages.

and fcs was standards effort pushed by llnl

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

54 Processors?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 54 Processors?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 28 Jul 2005 10:39:58 -0600
lists@ibm-main.lst (Phil Smith III) writes:
I remember when ISF came out for VM (a sort of Sysplex-type thing, allowing multiple physical processors to look like a single image): it was designed to support some large number of connections, but at the last minute, IBM chickened out and capped it at 2. So somebody did a global change on the doc, and it wound up saying you could connect "up to two systems".

Been a running joke (for some of us) ever since ...


my wife did her stint in pok in charge of loosely-coupled architecture, where she came up with Peer-Coupled Shared Data architecture (ims hot standby used some of it ... but you really didn't see it until parallel sysplex)
https://www.garlic.com/~lynn/submain.html#shareddata

she also fought some battles (which she lost) over advanced features for the trotter/3088 channel-to-channel product (it was sort of a bus&tag ctca switch with eight channel interfaces).

sjr had a number of vm370-based projects ... including the original relational database & sql work, which was done on the vm370 platform
https://www.garlic.com/~lynn/submain.html#systemr

which finally resulted in some tech transfer to endicott and was eventually released as sql/ds. there was a subsequent transfer of sql/ds from endicott back to stl, resulting in (mainframe) db2.

in the early 80s, sjr also eventually got a trotter/3088 and did some hardware enhancements ... along the lines of my wife's original objectives. they put together an eight-way vm cluster using peer-to-peer protocols. things like going from zero to full cluster consistency was a subsecond operation.

attempting to get the support into the product path ... they were told that they would have to move the implementation from an underlying peer-to-peer protocol ... to a half-duplex LU6.2-based protocol. The LU6.2 implementation (running on the exact same hardware) resulted in the subsecond consistency operation increasing to something like 30 seconds elapsed time (because of the enormous latencies that were introduced when going from a full peer-to-peer operation to an LU6.2-based infrastructure).

slightly related postings from when my wife worked on a fully peer-to-peer networking architecture specification (but ran into a huge amount of pushback from the SNA forces):
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5

we later did the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp

during the ha/cmp days ... i was asked to author part of the corporate continuous availability strategy document; unfortunately both POK and Rochester complained that at the time, they couldn't meet the requirements and much of that section was dropped. in this time frame, we had coined the terms disaster survivability and geographic survivability to distinguish from straight disaster recovery
https://www.garlic.com/~lynn/submain.html#available

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

big endian vs. little endian, why?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: big endian vs. little endian, why?
Newsgroups: alt.folklore.computers
Date: Thu, 28 Jul 2005 13:02:02 -0600
over a year ago, i bought a dell dimension 8300 with a multithreaded 3.4ghz processor, 4gbytes of memory, and a pair of 256gbyte sata drives on a sata raid controller.

i've observed that my first personal computer was a 360/30 ... the univ. would shut down the machine room over the weekend and i got the whole place to myself from 8am sat. until 8am monday (it sometimes made it a little difficult going to monday classes after being up for 48hrs straight).

in any case, the 360/30 personal computer had 64kbytes of memory ... 2**16 ... and nearly 40 years later ... i have a personal computer with 4gbytes of memory ... 2**32 ... an increase of 2**16 in memory size over a period of nearly 40 years.
https://www.garlic.com/~lynn/2005b.html#18 CAS and LL/SC
https://www.garlic.com/~lynn/2005h.html#35 Systems Programming for 8 Year-olds

i've periodically complained about the machine getting sluggish with mozilla tabs ... i have a default tab bookmark folder that i regularly use ... click on it and it will fetch 130 some-odd urls simultaneously ... then i start selecting other URLs (into new tabs) from the original 130 (and deleting as i go along). somewhere around 250 open tabs ... things start to really bog down.
https://www.garlic.com/~lynn/2004e.html#11 Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!
https://www.garlic.com/~lynn/2004e.html#54 Is there a way to configure your web browser to use multiple

nominally opening a new URL in a new background tab ... should go on asynchronously to what you are doing in the current tab. around 250-300 tabs ... the browser just about locks up while fetching the new URL (and you may also start getting browser popup messages about things getting sluggish ... and you have to click on the popups to clear them, which is really annoying).

it isn't starting to page ... real storage in-use will have maybe 500-600mbytes in use by the browser ... with a couple gbytes still free.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Which certification authority to use

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Which certification authority to use
Newsgroups: comp.security.misc,microsoft.public.certification,alt.computer.security
Date: Thu, 28 Jul 2005 14:50:35 -0600
Maik Wiege <mswiege*nospam*@gmx.de> writes:
Is there a table or some kind of overview of the certification authorities that are preinstalled in all browsers? There are so many available, and of course I want my certificate signed by a ca that is preinstalled in as many browsers as possible, otherwise I could just sign it myself. :-) And that way I could do some price comparison.

here is old look we did some time ago on the subject:
https://www.garlic.com/~lynn/aepay4.htm#comcert14 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#comcert16 Merchant Comfort Certificates

we were asked to consult with this small client/server startup in silicon valley on doing payment transactions and something called a payment gateway. in the year we worked with them, they moved to mountain view and changed their name from mosaic to netscape ... and the work is now frequently called e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

as part of the effort, we had to go around and sort of audit several of the major organizations issuing these things called ssl domain name certificates.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the basic technology is that public keys are filed in trusted public key repositories. in infrastructures like pgp ... this frequently is done by individuals with respect to other individuals they know.

in the case of the SSL domain name certificates ... certification authority root public keys were pre-installed into a trusted public key repository built into the browser software before it was distributed.

these certification authority root public keys can be used for directly signing customer digital certificates ... or, in some cases, they may be used for signing other organizations' digital certificates containing their own unique public keys.

in a standard PKI trust hierarchy ... the root public key may be used for signing subsidiary certificates containing subsidiary public keys ... and then the subsidiary public keys are used for directly signing general digital certificates.

as a result ... you may find a ca that has a root public key pre-installed in a large number of different browsers ... but it may be one of the organization's subsidiary public keys that signs your specific digital certificate.
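
a toy illustration (python, no real cryptography ... the names are invented and the "verify" step is just an issuer-linkage check standing in for an actual signature check) of how a browser ends up trusting a certificate signed by a subsidiary key:

PREINSTALLED_ROOTS = {"ExampleRoot CA"}          # the browser's built-in repository

# each certificate records who it names (subject) and who signed it (issuer)
cert_chain = [
    {"subject": "www.example.com", "issuer": "ExampleSub CA 1"},   # your certificate
    {"subject": "ExampleSub CA 1", "issuer": "ExampleRoot CA"},    # subsidiary/intermediate
]

def chain_is_trusted(chain, roots):
    """walk from the server certificate up to a pre-installed root."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:   # broken linkage (would be a bad signature)
            return False
    return chain[-1]["issuer"] in roots           # top of the chain must be a known root

print(chain_is_trusted(cert_chain, PREINSTALLED_ROOTS))   # True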

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,comp.lang.fortran
Date: Fri, 29 Jul 2005 09:42:45 -0600
"John Mashey" writes:
Speaking of 1988: "Nonetheless, senior managers and engineers saw trouble ahead. Workstations had displaced VAX VMS from its original technical market. Networks of personal computers were replacing timesharing. Application investment was moving to standard, high-volume computers. Microprocessors had surpassed the performance of traditional mid-range computers and were closing in on mainframes. And advances in RISC technology threatened to aggravate all of these trends. Accordingly, the Executive Committee asked Engineering to develop a long-term strategy for keeping Digital's systems competitive. Engineering convened a task force to study the problem."

vax shipments 78-87, sliced and diced by year, model, us, non-us.
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

shift from traditional vax to microvax (and similar small computers) was occurring by '85.

the (ibm) 4341 was in the same time frame as the 11/780 and sold into a similar market, and there were similar effects. when the 4341 became available ... there was a tremendous explosion in sales ... with lots of large corporations snapping them up in orders of several hundred at a time. the 4381 was the 4341 follow-on ... and never saw anything like the 4341 uptake ... since that market was already starting to shift to PCs and workstations. minor ref:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,comp.lang.fortran
Date: Fri, 29 Jul 2005 10:11:18 -0600
Anne & Lynn Wheeler writes:
the (ibm) 4341 was in the same time frame as the 11/780 and sold into a similar market, and there were similar effects. when the 4341 became available ... there was a tremendous explosion in sales ... with lots of large corporations snapping them up in orders of several hundred at a time. the 4381 was the 4341 follow-on ... and never saw anything like the 4341 uptake ... since that market was already starting to shift to PCs and workstations. minor ref:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers


for some topic drift over into clusters ... the referenced post has some comment about the competitiveness of six 4341s in a small cluster vis-a-vis a large (mainframe) 3033

from the above referenced post:
The other issue was that a very small percentage of the 4341s were installed with (POK) MVS. The combination of non-MVS and serious 303x competition resulted in some interesting internal politics (the SHARE user group has a long litany of internal politics obfuscating the ability to market and sell VM as well as VM/4341s, whether into traditional data center operations or into the emerging departmental server market). One of the stranger internal antics was that at one point, POK managed to cut the chip allocation for a critical 4341 component (from an internal fab) in half (as a defensive 303x marketing operation). Various SHARE studies highlighted that the 11,000-plus VAX sales (which should have been 4341s) were as much the result of various internal corporate politics as anything DEC might have done.

... snip ...

cluster related post in recent thread on the latest mainframe announcements earlier this week
https://www.garlic.com/~lynn/2005n.html#7 54 Processors?

the thread started out with a question on memory/cache consistency overhead and how they were able to get to a 54-processor smp ... other posts in the thread
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#4 54 Processors?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,comp.lang.fortran
Date: Fri, 29 Jul 2005 11:03:46 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Hmm. Why should they have been 4341s? But, other than that, I remember that and can witness that it was fairly common.

ref:
https://www.garlic.com/~lynn/2005n.html#10 Code density and performance
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance

the original ecps ... micro-code performance assist for virtualization (about half-way between straight software and current LPARS)
https://www.garlic.com/~lynn/submain.html#mcode

was originally a project for the 148. there was a project targeted at packaging the rest of the software as part of the machine ... attempting to eliminate, as much as possible, the customers' need for trained personnel to support the infrastructure. this got vetoed somewhere up in corporate.

the 4341 was the follow-on to the 148 ... and was being snapped up by the same market that was buying vax. again there was an attempt to integrate as much as possible of the software with the machine operation ... substantially lowering the time & skill levels required by customers to support such installations (in part because the big explosion in the number of installations had increased demand beyond the available skills). again it was vetoed.

while the claimed price/performance for the 4341 was better than the vax, numerous SHARE studies showed that many customers were picking vax over 4341 because of the vax's simpler install and support (fewer/lower skills needed).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why? (Was: US Military Dead during Iraq War

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why? (Was: US Military Dead during Iraq War
Newsgroups: alt.folklore.computers
Date: Sat, 30 Jul 2005 08:51:36 -0600
forbin@dev.nul (Colonel Forbin) writes:
No, it's about winning or losing at *business*. CCC lost, for instance. Seymour sort of forgot that a successful business generally has to sell something. The trade deficit of the US should tell our government something, but they don't listen.

If your business loses money consistently, it isn't going to survive, and nobody is going to benefit.

Welch didn't advocate simply firing the bottom 10% of the workforce. His idea of differentiation has more to do with resource allocation.

Indeed, if you read Welch's book, he advocates paying the most attention to the "middle class" of the workforce because those people will determine whether your company succeeds or fails.

You simply can't spend astronomical amounts as a business to "develop" the bottom 10% of your workforce. If you actually read Welch's book, he promotes the notion that all people are valuable, but they may not be able to contribute in a given company. I don't think that's a particularly controversial statement.


i once had a manager who claimed that the top 10 percent of the employees tended to provide 90 percent of the productivity ... but managers tended to spend 90 percent of their time on the bottom 10 percent of the employees.

his observation was that if managers could instead spend the majority of their time eliminating obstacles for the top 10percent and improving their productivity (maybe doubling) ... it would be the most valuable thing that they could possibly do.

and always for some drift ... boyd references
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

and a recent book: Certain to Win, The Strategy of John Boyd, Applied to Business

http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why? (Was: US Military Dead during Iraq War

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why? (Was: US Military Dead during Iraq War
Newsgroups: alt.folklore.computers
Date: Sat, 30 Jul 2005 12:39:35 -0600
"Lars Poulsen (impulse news)" writes:
To me, true "empowerment" means to create an environment where frank naming of issues is encouraged and conflict avoidance and brownnosing are eradicated. Along with looking for what it is that is holding back the team (lack of specific equipment or tools; conflicting requirements pushed by rivaling groups in the organization representing "customers") and finding ways to take the roadblocks away so that the team members can apply their energy to doing the job instead of fighting "the system" or each other.

Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

had this talk called Patterns of Conflict ... which was how to deal in a competitive situation ... even applied to business ... but most of the competitive historical examples were taken from warfare.

he then started developing another talk called Organic Design for Command and Control .... towards the end he gets around to saying what he really means is "leadership and appreciation" (rather than command and control).

he had one story (from when he was head of lightweight fighter plane design at the pentagon) where the 1-star (he reported to) called a large meeting in the auditorium and fired him (for something about running a disorderly organization, after observing loud disagreements between members of boyd's organization attempting to thrash out technical details ... even 2LTs arguing with Boyd about technology). a very short time later a 4-star called a meeting in the same auditorium with all the same people and rehired Boyd ... and then turned to the 1-star and told him to never do that again.

my wife has just started a set of books that had been awarded to her father at west point ... they are from a series of univ. history lectures from the (18)70s/80s (and the books have an inscription about being awarded to her father for some excellence by the colonial daughters of the 17th century).

part of the series covers the religious extremists that colonized new england and how the people finally got sick of the extreme stuff that the clerics and leaders were responsible for and eventually migrated to more moderation. it reads like some of lawrence's descriptions of religious extremism in the seven pillars of wisdom. there is also some thread that notes that w/o the democratic influence of virginia and some of the other moderate colonies ... the extreme views of new england would have resulted in a different country.

somewhat related is a story that my wife had from one of her uncles several years ago. salem had sent out form letters to descendants of the town's inhabitants asking for contributions for a memorial. the uncle wrote back saying that since their family had provided the entertainment at the original event ... he felt that their family had already contributed sufficiently.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1.8b2 / 1.7.11 tab performance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: 1.8b2 / 1.7.11 tab performance
Newsgroups: netscape.public.mozilla.general
Date: Mon, 01 Aug 2005 09:09:10 -0600
i have a mozilla 1.8b2 from june that i use with 200+ open tabs. normally, when opening a new tab in the background, things are still responsive in the current tab (scrolling forward/backward, etc)

starting around 250 open tabs, things start getting sluggish; opening a single new tab in the background locks up the current tab for several seconds. also i'll sporadically get a popup that stops things (until i click on it) saying something about a slow system or stopped script.

running 1.7.11 with the same profile ... i start seeing the slow system popup at maybe 130 open tabs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,comp.lang.fortran,alt.folklore.computers
Date: Mon, 01 Aug 2005 12:34:04 -0600
pg_nh@0506.exp.sabi.co.UK (Peter Grandi) writes:
Let's put it another way: a tech company survives only in the long term if they do regular business model transitions.

However business model transitions are difficult and expensive, and well established companies have a lot of managers who are vested in the old business model, and are close enough to retirement, and many fewer entrenched managers who are going to benefit from the new business model.


put it another way ... new technology can change the business ....

there was a big explosion in sales of 4341 and vax machines from the late 70s thru the early 80s. they appeared to cross some price/performance threshold and you saw large companies placing 4341 orders that involved hundreds of machines at a time. this put a severe stress on the available IT skills (in addition to having several large machines in a single datacenter supported by many tens of people ... you now saw hundreds or thousands of such machines spread all over the enterprise).

the 4381 follow-on to the 4341 didn't have the equivalent market uptake ... since that market segment was already starting the transition to workstations and large PCs.

technology innovation/change frequently results in business innovation/change (fundamental changes in the environment have a way of resulting in wide ranging changes).

of course this brings in the topic of OODA-loops and agile business
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

earlier posts in the thread:
https://www.garlic.com/~lynn/2005n.html#10 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#12 Code density and performance?

the 148 and 4341 class machines also contributed to the large increase in internal corporate network nodes in the late 70s and early 80s.

the arpanet/internet had about 250 nodes on 1/1/83 when it converted to internetworking protocol (with gateways)
https://www.garlic.com/~lynn/internet.htm#0
https://www.garlic.com/~lynn/subnetwork.html#internet

i've asserted that the internal corporate network was larger than the arpanet/internet from just about the start because it had a kind of gateway function built into every node from just about the beginning. not too long after the arpanet/internet converted to internetworking protocol (and got gateway functionality) ... the internal corporate network passed 1000 nodes:
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/subnetwork.html#internalnet

the number of nodes in the internet finally passed the number in the internal corporate network sometime in mid-85. part of the reason was the gateway functionality that the internet got with the conversion to internetworking protocol on 1/1/83. The other part was that a large number of the internet nodes were becoming workstations and large PCs ... while there were a large number of internal corporate forces restricting such technology to terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

another contributing factor to the slower internet growth in the 83-85 time-frame was that a significant number of educational nodes were on bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

while bitnet/earn used technology similar to the internal corporate network, the nodes didn't show up in either the internet counts (at least not initially) or the internal corporate network counts.

some csnet, bitnet, & internet histories (from isoc):
http://www.isoc.org/internet/history/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Communications Computers - Data communications over telegraph

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Communications Computers - Data communications over telegraph
circuits
Newsgroups: alt.folklore.computers
Date: Mon, 01 Aug 2005 15:56:09 -0600
haynes@alumni.uark.edu (Jim Haynes) writes:
In the previous message I neglected to mention Collins Radio, which had one of the early computers designed for message switching. Collins did a lot of business with the airlines. They had some visions of getting into more general data processing aimed at the airline market; but the cost of getting up to speed was ruinous.

the san jose plant site had a t3/microwave collins digital radio link from the plant site (roof of bldg. 12) to repeater towers, one south of the plant site (above stl/bldg. 90) and one above the san jose dump going to the old bldg. 29 (now torn down and turned into a housing project).

when the hiway 85 section was first built ... somebody claimed that it was triggering c-band radar detectors (on the section that was line-of-sight between the plant site and the repeater tower above stl/bldg. 90) ... just past the cottle exit before getting to 101.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 02 Aug 2005 08:11:58 -0600
pg_nh@0506.exp.sabi.co.UK (Peter Grandi) writes:
Indeed, because the long-forgotten statistics I had mentioned show that programs that have not been specifically locality improved tend to have ''hotspots'' of about 100-200 bytes long... 512 bytes is sort of a decent compromise between too small and too large for that sort of program.

one of the studies at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in the early 70s was doing full I & D traces and attempting to re-org programs for better virtual memory characteristics. the application was released as the vs/repack product in the mid-70s. besides looking at instruction hotspots and repacking programs ... it was also used to study storage utilization patterns. it was used as part of analysing and redesigning the apl\360 garbage collection scheme in the port to cms\apl (basically going from swapping whole real-storage workspaces ... to large virtual memory workspaces).

as an aside ... the science center had a number of different performance analysis related activities; besides vs/repack ... there was kernel instrumentation and 7x24 activity recording, hot-spot investigation, multiple regression analysis, analytical modeling, etc. one of the analytical modeling tools evolved into the performance predictor on the world-wide sales and marketing system
https://www.garlic.com/~lynn/subtopic.html#hone
where sales & marketing people could input some customer details and ask performance & configuration what-if questions. the extensive performance monitoring laid the groundwork and evolved into capacity planning
https://www.garlic.com/~lynn/submain.html#bench

360/67 had 4k pages, 1mbyte segments, 24bit & 32bit addressing.

370 was initially announced with only real storage.

when virtual memory was announced for 370 ... the virtual memory support had 2k & 4k pages, 64k & 1mbyte segments, and 24bit addressing.

operating systems targeted for smaller real memory 370 configurations implemented 2k pages ... while the operating system targeted for larger real memory 370 configuration implemented 4k pages.

the morph from cp67 to vm370 had vm370 using native 4k pages for virtual machine emulation ... although it had to support both 2k-page and 4k-page operation for shadow pages, to correspond to whatever the virtual machine was doing.

vs/1 was one of the 2k page operating systems ... gaining better packing in small real storage configurations. however, starting in the mid-70s ... standard real storage configurations were getting much larger.

part of ecps (virtual machine microcode performance assist) on 138/148 (and later 4341) ...
https://www.garlic.com/~lynn/submain.html#mcode

it was possible to tune vs/1 running in a virtual machine to run faster than vs/1 directly on the real hardware. part of it was configuring vs/1 such that rather than vs/1 doing its own paging in 2k units ... it effectively relied on vm370 to do paging in 4k units. effectively any better packing based on using the smaller page size was more than offset by the overhead of the smaller disk transfer sizes. as an aside, some amount of the hotspot analysis was used for selecting 6kbytes of kernel code for migration into ecps microcode.

the smaller page sizes and improved packing is targeted at optimizing real storage usage as a scarce resource. as machines were getting faster and larger ... system bottlenecks were shifting from real storage constrained to disk transfer constrained (file i/o, paging, all kinds of disk transfers).

on the larger real storage configurations any loss in real storage optimization going from 2k to 4k pages ... was more than offset by improvement in disk optimization (especially since major system bottlenecks were shifting from real storage limited to disk limited).

this was further highlighted in the early 80s with the implementation of big pages. the cpu hardware was still 4k pages ... however the page transfer management collected ten 4k pages (into a big page) at a time for page-out. any subsequent (4k) page fault would fetch all ten 4k pages (of the related big page) in one transfer. note that this was slightly better than straight 40k pages ... since the aggregation of ten 4k pages into a big page was based on reference locality and not contiguous 40k virtual memory.
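
a toy sketch (python; the data structures are invented for illustration, not the actual implementation) of the big-page bookkeeping described above ... page-out gathers ten recently referenced 4k pages into one unit, and a later fault on any member brings the whole unit back in a single transfer:

BIG_PAGE_SIZE = 10                         # ten 4k pages per big page

big_pages = {}                             # big-page id -> member virtual addresses
member_of = {}                             # virtual address -> big-page id
next_id = 0

def page_out(referenced_pages):
    """group pages selected for page-out by reference locality, ten at a time."""
    global next_id
    for i in range(0, len(referenced_pages), BIG_PAGE_SIZE):
        group = referenced_pages[i:i + BIG_PAGE_SIZE]
        big_pages[next_id] = group                 # one contiguous write to disk
        for vaddr in group:
            member_of[vaddr] = next_id
        next_id += 1

def page_fault(vaddr):
    """a fault on any member fetches all the pages of its big page."""
    bp = member_of.get(vaddr)
    if bp is None:
        return [vaddr]                             # ordinary single-page fetch
    group = big_pages.pop(bp)                      # one transfer brings the whole group
    for v in group:
        member_of.pop(v, None)                     # members are back in memory now
    return group

# pages are grouped in the order they were referenced, not by contiguous addresses
page_out([0x7000, 0x2000, 0x9000, 0x1000, 0x4000,
          0x8000, 0x3000, 0x6000, 0x5000, 0xA000])
print(page_fault(0x3000))                          # returns all ten members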

some past big page postings
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?

early in this period ... i had started asserting that the relative system disk thruput had declined by a factor of ten times over a period of years. the disk division assigned the disk performance group to refute the claims. after a couple weeks they came back and said that i had slightly understated the problem. some past posts on the subject:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning

misc. past posts mentioning vs/repack
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 02 Aug 2005 08:56:26 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes. This is what IBM did with MVS, after a brief and unsatisfactory experience with traditional paging. Paging was effectively used just to adjust the working set by dropping unused pages, and tasks were swapped in (almost) their entirety.

My current proposal is more radical, and is to drop the paging requirement altogether, so that TLB misses could be abolished. One doesn't get much architectural simplification by reducing them, but one gets a lot by abolishing them! Naturally, this would need some changes to the system and linker design.


MVS had two separate issues .... one was the disk transfer bottleneck issue ... which was addressed by *big pages* ... see previous post (with numerous big page references)
https://www.garlic.com/~lynn/2005n.html#17 Code density and performance

it was referred to as swapping ... to distinguish it from 4k-at-a-time paging ... but it was logically demand paging, just in 40k blocks ... and the 40k blocks were not necessarily contiguous virtual memory, having been composed from the virtual pages referenced during the previous execution period.

the other issue ... was that their page replacement algorithm had numerous deficiencies ... and their page i/o pathlength was something like 20 times that of vm370 ... so their page-at-a-time operation was decidedly inefficient.

a simple example ... in the period when virtual memory support was being added to mvt (for os/vs2 svs ... precursor to mvs), i made some number of visits to pok to talk about virtual memory and page replacement algorithms. they had done some simulation and had decided (despite strong objections to the contrary) that it was more efficient to select a non-changed page for replacement (before selecting a changed page for replacement) ... since they could avoid writing a non-changed page out to disk (the previous copy was still good). this had a strong bias to replacing code pages before data pages. it turned out that it also had a strong bias to replacing shared, high-use library code pages before replacing private, lower-use data pages. this implementation extended well into the mvs time-frame ... finally getting redone about the time of the changes for big pages. in any case, the various inefficiencies in the native mvs paging implementation (at least prior to the introduction of big page support) drastically degraded mvs thruput anytime it started doing any substantial paging at all.

one of the other characteristics of big pages ... was that it eliminated the concept of a home location on disk for a virtual page. when a big page was fetched into memory ... the related disk location was immediately discarded. when components of a big page were selected for replacement ... they always had to be (re)written back to disk (whether they had been changed during the most recent stay in storage or not). the whole process was to try and optimize disk arm usage/motion. it had sort of a moving cursor algorithm with the current active location/locality for the disk arm. recommendations for available space for big-page allocation were typically ten times larger than expected usage. this tended to make the area in front of the moving cursor nearly empty ... so new writes would require the minimum arm motion ... and the densest allocation of pages (that might page fault and require fetching) was just behind the current moving cursor.

big pages tended to increase the number of bytes transferred ... more writes since there was no conservation of non-changed replaced pages using their previous disk location ... and fetches always brought the full 40k in one blast ... even if a 4k-at-a-time strategy might have only brought 24k-32k bytes. the issue was that it traded off real storage utilization optimization and bytes-transferred optimization for disk arm access optimization (drastically decreasing the number of arm accesses per page transferred).
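
a minimal sketch of the moving-cursor idea described above (the slot count and names here are made up for illustration ... this is not the actual mvs/vm code): new big-page writes always land just ahead of a cursor sweeping a mostly-empty paging area, and a big page's disk slot is given up the moment it is fetched back into storage.

#include <stdio.h>

#define SLOTS 100          /* paging-area slots ... roughly 10x expected usage */

static char used[SLOTS];   /* 1 = slot currently holds a big page */
static int  cursor = 0;    /* next write position; sweeps forward */

/* write a big page: take the first free slot at/after the cursor,
   so new writes need near-minimum arm motion */
int bigpage_write(void)
{
    for (int i = 0; i < SLOTS; i++) {
        int slot = (cursor + i) % SLOTS;
        if (!used[slot]) {
            used[slot] = 1;
            cursor = (slot + 1) % SLOTS;
            return slot;
        }
    }
    return -1;              /* paging area full */
}

/* fetch a big page back into storage: the disk copy is discarded
   immediately, so the slot "evaporates" and the area ahead of the
   cursor stays nearly empty */
void bigpage_fetch(int slot)
{
    used[slot] = 0;
}

int main(void)
{
    int a = bigpage_write();
    int b = bigpage_write();
    bigpage_fetch(a);       /* page faulted back in ... slot freed */
    printf("slots %d %d, cursor now %d\n", a, b, cursor);
    return 0;
}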

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why? (Was: US Military Dead during Iraq War

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why? (Was: US Military Dead during Iraq War
Newsgroups: alt.folklore.computers
Date: Tue, 02 Aug 2005 10:43:40 -0600
jmfbahciv writes:
This bothers the shit out of me. Do you have the option of not doing that cataloging? And is "don't do that" a default?

part of the issue is that there is a large amount of unix filesystem metadata that is normally cached in processor memory ... and on disk is spread all over the place. standard unix metadata has no consistent way of transitioning from one consistent set of metadata changes to the next.

basically jfs was done for aix in the late 80s and released with aixv3 for rios/rs6000.

all the filesystem metadata (allocation, inode, etc) information was tracked as changes ... with very specific commit points.

it basically is the same as database transactions ... but rather than all data ... it is just the filesystem structure/metadata.

the commit points made sure that what was on disk stayed in a consistent state (either the complete set of changes occurred or no changes occurred). standard unix filesystem metadata is subject to very lazy writes ... which could easily be done in an inconsistent order and were subject to all sorts of vulnerabilities depending on where failures might occur. part of this was that a consistent change to filesystem metadata could involve a set of multiple disk records that were widely distributed on the disk. determining whether the metadata was in an inconsistent state after restart was somewhat ambiguous and extremely time-consuming.

the metadata journal/log was a sequential set of records ... treated as all-or-nothing; a partially written set of changes would simply be ignored. on restart ... active entries in the log could represent filesystem metadata that might have only been partially written to disk. sequentially reading the active log metadata and writing it to the appropriate places was significantly less time-consuming than examining all the filesystem metadata attempting to find inconsistencies (possibly a couple megabytes of log information vis-a-vis hundreds of megabytes of filesystem information).

relying on the sequential log writes for recovery consistency also made it possible to relax some of the standard filesystem metadata updating processes (trading off sequential writes for some amount of random writes).

now there was a side issue with jfs. 801 had defined database memory that was implemented in rios and used for the jfs implementation.
https://www.garlic.com/~lynn/subtopic.html#801

basically, changes to 128-byte virtual storage lines could be tracked and later identified. for jfs, the unix filesystem metadata was slightly restructured so that it occupied a span of virtual memory that was flagged as database memory. when a commit operation occurred, the section of database memory was scanned ... looking for all the 128-byte memory sections that had been modified. all of these modified 128-byte memory sections were then written to the journal/log. supposedly this avoided having to update all the filesystem code to insert explicit logging calls ... and was also supposedly more efficient than having explicit logging calls.
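
a rough sketch of that commit scan (a software dirty-bit array stands in for the 801 database-memory hardware; all names and sizes here are invented for illustration): metadata updates go through one routine that marks the touched 128-byte lines, and commit appends only the modified lines to the log.

#include <stdio.h>
#include <string.h>

#define LINE      128                   /* granularity of change tracking */
#define META_SIZE (LINE * 64)           /* toy metadata region */

static unsigned char meta[META_SIZE];           /* filesystem metadata region */
static unsigned char dirty[META_SIZE / LINE];   /* stands in for 801 database memory */

/* all metadata updates go through here so changes get tracked */
void meta_update(size_t off, const void *src, size_t len)
{
    memcpy(meta + off, src, len);
    for (size_t l = off / LINE; l <= (off + len - 1) / LINE; l++)
        dirty[l] = 1;
}

/* commit: scan for modified 128-byte lines and append them to the log */
void commit(FILE *log)
{
    for (size_t l = 0; l < META_SIZE / LINE; l++) {
        if (dirty[l]) {
            fwrite(&l, sizeof l, 1, log);           /* which line changed */
            fwrite(meta + l * LINE, LINE, 1, log);  /* its contents */
            dirty[l] = 0;
        }
    }
    fflush(log);   /* a real journal write would be forced to disk here */
}

int main(void)
{
    FILE *log = tmpfile();
    if (!log) return 1;
    int inode_size = 4096;
    meta_update(3 * LINE + 8, &inode_size, sizeof inode_size);
    commit(log);
    fclose(log);
    return 0;
}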

the aos group in palo alto (that had done the bsd port to the romp-pc/rt) was looking at doing a 386/486 port that also had support for a journaled filesystem. the issue was that only the rios platform had support for database memory ... doing an implementation on 386/486 required going thru the whole filesystem and inserting explicit calls to logging wherever there were metadata changes. the problem this caused between palo alto and austin ... was that the explicit logging calls turned out to be faster than the rios database memory implementation.

the aos group in palo alto was already in disfavor with austin.

originally the austin group was doing a displaywriter follow-on for the office products division using an 801/romp. when the displaywriter follow-on project was killed ... it was decided to retarget the product to the unix workstation market. something called the virtual resource manager (vrm, sort of an abstract virtual machine) was defined and implemented in pl.8 (leveraging the pl.8 programming skills of the internal group). the outside company that had done the pc/ix port to the ibm/pc was hired to do a port to the VRM interface, on the theory that it would take less effort to do a VRM interface implementation than a bare hardware implementation. this became aix & the pc/rt.

by most accounts, the port to the VRM interface took substantially more resources (than standard unix hardware port). this was further reinforced when the palo alto group did a bsd port to the bare romp-pc/rt hardware with less resources than it took to do either the VRM implementation or the unix port to the VRM implementation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 02 Aug 2005 12:56:01 -0600
"Eric P." writes:
The preferential tossing of shared code being an undesirable side effect, right? If not then why?

... snip ...
I remember hearing about a VM/CMS feature whereby it would coagulate the disparate page file pages of long running process in the background to improve performance. I always thought that was a nice idea.

This avoids the infamous VMS system console message: "Page file badly fragmented, continuing with degraded performance." on systems that had been up for a long time with processes locking down page file pages.


the problem was that they were giving higher precedence to the changed/non-changed state than to the reference state ... which, in effect, violated a fundamental principle of LRU replacement strategy.

as a result, high-usage non-changed pages ... were being replaced before lower-usage changed pages. an important set of high-usage non-changed pages were the high-use, shared library executables. a large set of lower-usage changed pages were private data pages. this tended to cause a significant increase in overall paging ... because selecting a high-use page for replacement before lower-use pages would tend to result in an immediate page fault to bring the high-use page back into memory.

there was vm/cms page migration stuff (which tended to coalesce pages in a virtual address space) that was part of the resource manager that i shipped in the mid-70s:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#bench

lots of page replacement, page allocation, device management and resource scheduling algorithms. however, the majority of the code turned out to be kernel restructuring for integrity and smp support. another part of the resource manager was attempting to better identify long-running, non-interactive execution and better optimize its execution scheduling burst characteristics ... aka background tasks that executed sporadically would see a higher probability of having their pages replaced ... making background task execution more bursty would increase the probability of retaining critical pages during the execution bursts.

cp67 picked up and shipped a lot of optimization and scheduling changes that i was doing as an undergraduate in the 60s. in the morph of cp67 into vm370, a lot of those changes were dropped. there were various resolutions from the mainframe user group share
http://www.share.org/

to put the wheeler scheduler (back) into vm370.

the big page large space allocation wasn't as much a fragmentation issue ... it was much more dynamic arm access optimization ... the most recently written pages would be mostly in a nearby clump on the trailing edge of the *cursor* ... and the forward edge of the cursor (where new writes were being performed) would be nearly empty. this significantly increased disk arm locality (decreased arm motion distance, aka new writes could be performed with the minimum arm motion .... and most of the out-of-position arm movement to retrieve previously written pages would tend to be minimum arm travel in the region trailing the cursor).

note that not too long after mvs big pages were implemented ... there was a similar big page implementation for vm370 (early 80s).

one of the other things that happened as part of putting the wheeler scheduler into vm370 ... was that the resource manager got picked as the guinea pig for charging for kernel software.

as part of the 6/23/1969 unbundling announcement (in large part prompted by gov. litigation), they started charging for application software, but kernel software was still free.

in part because of the appearance of clone 370 mainframes ... they started looking at pricing separately for kernel software. the policy established with the resource manager was that kernel software ... not directly related to hardware support (processor, device drivers, etc) could be charged for.

this then created a problem when it was decided to ship smp support in the product. a large part of the kernel restructuring for smp support had already been shipped as part of the priced resource manager. the kernel pricing policy would preclude having free smp kernel support that had as a pre-requisite a priced component. the resolution was that when they shipped smp support ... a huge chunk of the code that was part of the resource manager was ripped out and moved to the "free" kernel.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 02 Aug 2005 13:29:47 -0600
big page recent posts
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?

the big page strategy of trying to maintain the arm position in an area of empty space for new consecutive writes ... is similar to some of the subsequent work on log-structured filesystems.

while log-structured filesystems tended to have good arm locality for writes ... they sporadically had to do garbage collection to coalesce file records into consecutive locations (and recreate a large expanse of empty space).

big page strategy avoided the garbage collection overhead by 1) trying to have ten times as much available space as would ever be allocated and 2) deallocating the backing disk location of big pages that were subsequently fetched into memory ... effectively evaporating them off the surface of the disk. any long-lived big pages on disk would tend to be randomly, sparsely allocated across the whole surface ... but long-lived also implies that they haven't been re-used (otherwise they would have evaporated).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 02 Aug 2005 15:43:40 -0600
"Eric P." writes:
Ok now I follow. I understood _what_ it was doing but thought you meant you changed it _to_ work like that (which is why I was inquiring further as to the rational for that approach). Rather it was you who had the "strong objections to the contrary".

Yes, in the extreme case it could wind up with all memory just modified pages.

An alternate idea might have been to take the list head frame, remove just one, or a few, oldest references (using the back link), and recycle it to the back of the global list if references remain. That would tend to keep shared pages resident at the expense of private pages.


no, i was saying that as an undergraduate in the '60s, i implemented global LRU ... based purely on reference patterns.

in the 70s ... when some people in POK were doing the initial work on adding virtual memory to MVT (initially for os/vs2 svs ... but carried over into MVS) ... they came up with this idea that it required less work (and less latency) to replace non-changed pages than it required to replace changed pages. i tried to talk them out of it ... partially based on the fact that it violated fundamental principles of least-recently-used replacement policy.

now somebody did try and do effectively the reverse for "shared" pages ... attempting to keep virtual pages that were shared between multiple address spaces in storage ... again independent of their actual reference usage. the result was that low-usage pages that just happened to be designated as shared/common across multiple address spaces were staying in memory ... while higher usage pages were being removed. in both the SVS/MVS case and the shared-pages case, page replacement selection was based on information that was independent of actual usage. in both cases, it resulted in sub-optimal page replacement selection ... and therefore led to an increase in overall paging activity.
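
a small illustration of the difference (a toy frame table, not any actual svs/mvs or cp67 code; frame counts and names are made up): a plain global clock sweep selects on reference bits only, while the change-bit-biased variant victimizes non-changed frames first regardless of how recently they were referenced ... which is exactly how a hot shared code page gets tossed ahead of a stale private data page.

#include <stdio.h>

#define FRAMES 8

struct frame { int referenced; int changed; };
static struct frame f[FRAMES];
static int hand = 0;

/* plain global clock: sweep, clearing reference bits, and take the
   first frame found with its reference bit already clear */
int clock_select(void)
{
    for (;;) {
        if (!f[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % FRAMES;
            return victim;
        }
        f[hand].referenced = 0;
        hand = (hand + 1) % FRAMES;
    }
}

/* change-bit-biased variant: prefer any non-changed frame (to skip the
   page-out write), ignoring reference information ... this is the policy
   that tended to toss high-use shared code pages */
int biased_select(void)
{
    for (int i = 0; i < FRAMES; i++) {
        int s = (hand + i) % FRAMES;
        if (!f[s].changed)
            return s;
    }
    return clock_select();   /* everything changed: fall back to clock */
}

int main(void)
{
    for (int i = 0; i < FRAMES; i++) { f[i].referenced = 1; f[i].changed = 1; }
    f[2].changed = 0;        /* hot shared library code: referenced, never changed */
    f[5].referenced = 0;     /* stale private data: changed, not referenced lately */

    int b = biased_select(); /* picks the hot code page (frame 2) */
    int c = clock_select();  /* picks the stale data page (frame 5) */
    printf("biased picks %d, clock picks %d\n", b, c);
    return 0;
}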

the global LRU issue ran into another issue in the late '70s. about the time that I was doing the work on global LRU in the 60s ... there was some acm literature published on local LRU. in the late '70s somebody was trying to get a stanford PhD based on global LRU ... and there was quite a bit of resistance/push-back on granting the PhD because of the local/global LRU differences.
https://www.garlic.com/~lynn/subtopic.html#wsclock

somebody familiar with my work contacted me, asking me to provide the undergraduate work that i did in the 60s on global LRU in support of a stanford PhD in the late 70s on global LRU.

it turns out that I had some extensive data for the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

360/67 running cp67 with global LRU. this was a 360/67 with 768k of real storage (about 104 4k pages of pageable space after fixed kernel requirements) and a mixed-mode interactive cms workload. it provided subsecond response time for 75-80 users.

there was also an acm paper from the same period from the grenoble science center, where they had taken cp67 and modified it for local LRU, conforming very closely to the acm literature from the 60s. the grenoble machine had 1mbyte of real storage (about 154 4k pages of pageable space after fixed kernel requirements), a nearly identical workload to the cambridge workload, and provided very similar subsecond response time for around 35 users.

aka the cambridge system with global LRU, 2/3rds the available real storage for paging, and twice the number of users provided approximately the same performance as the grenoble system with local LRU, running a similar type of workload on the same operating system and similar hardware.

anyway, having the A/B global/local LRU comparison on the same operating system with similar hardware and workload appeared to help tip the balance in favor of granting the stanford PhD (in part because global LRU was so demonstrably superior to local LRU).

there was some similar type of conclusion when investigating disk cache circa 1980 that we did at SJR. we did some fancy instrumentation for collecting all disk record references on a number of different production systems. then the record use traces were run thru a simulator with lots of different caching strategies. one of the results ... was for a fixed/given amount of storage available for cache ... thruput was highest when the available storage for caching was at the most global point in the system. taking the fixed amount of storage and using it for a common (aka global) system-level cache provided more thruput than partitioning (aka local) the same amount of storage among the different channels, controllers and/or drives.
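
a toy trace-driven comparison along the same lines (purely illustrative sizes and a synthetic skewed trace ... nothing from the actual SJR traces or simulator): the same total number of cache slots is run either as one global LRU cache or carved into per-device partitions; with uneven device loads the global arrangement gets the better hit rate.

#include <stdio.h>
#include <stdlib.h>

#define DEVICES     4
#define TOTAL_SLOTS 64

struct slot { long key; long last_used; int valid; };

/* linear-scan LRU cache over a slot array; returns 1 on hit, 0 on miss */
static int cache_access(struct slot *c, int n, long key, long now)
{
    for (int i = 0; i < n; i++)
        if (c[i].valid && c[i].key == key) { c[i].last_used = now; return 1; }

    int victim = 0;                       /* miss: empty slot if any, else LRU */
    for (int i = 0; i < n; i++) {
        if (!c[i].valid) { victim = i; break; }
        if (c[i].last_used < c[victim].last_used) victim = i;
    }
    c[victim].valid = 1; c[victim].key = key; c[victim].last_used = now;
    return 0;
}

int main(void)
{
    static struct slot global[TOTAL_SLOTS];
    static struct slot part[DEVICES][TOTAL_SLOTS / DEVICES];
    long ghits = 0, phits = 0, refs = 200000;
    srand(1);

    for (long t = 0; t < refs; t++) {
        /* skewed load: device 0 gets ~70% of the references */
        int  dev = (rand() % 10 < 7) ? 0 : 1 + rand() % (DEVICES - 1);
        long rec = rand() % 40;           /* per-device working set of 40 records */
        long key = (long)dev * 1000000 + rec;

        ghits += cache_access(global, TOTAL_SLOTS, key, t);
        phits += cache_access(part[dev], TOTAL_SLOTS / DEVICES, key, t);
    }
    printf("global hit rate %.1f%%, partitioned hit rate %.1f%%\n",
           100.0 * ghits / refs, 100.0 * phits / refs);
    return 0;
}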

and for even more drift ... in the early 70s ... i had come up with a sleight-of-hand modification to the global LRU clock replacement strategy. nominally LRU (whether global or local) will degrade to FIFO under pathological conditions. the sleight-of-hand tweak to global LRU clock still looked, tasted and smelled the same ... but it had an interesting characteristic of degrading to RANDOM ... in the pathological cases where LRU algorithms nominally degraded to FIFO.

as previously mentioned there was a lot of I & D reference trace work done at cambridge and used as input for a variety of simulators. one of the simulators had implemented the default system global LRU replacement, a true, exact LRU replacement (based on exact ordering of every storage reference), and this sleight-of-hand thing. the normal system-level global LRU implementation tended to average within 5-15 percent of exact LRU ... never quite being as good. the sleight-of-hand variation tended to average 5-15 percent better than exact LRU.

Part of the issue is that normal system operation tends to vary dynamically, moment to moment, between different operational characteristics. Real live system operation tended to move rapidly between those different characteristics and had sporadic periods when standard LRU was doing pathological FIFO replacement.

As an aside ... the cambridge/grenoble (global/local LRU) system comparisons were with the cambridge system running standard vanilla global LRU clock ... it wouldn't have been fair to throw this other enhancement into the comparison.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's mini computers--lack thereof

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: Tue, 02 Aug 2005 17:07:07 -0600
"Rupert Pigott" writes:
To be blunt you have no clue what you are talking about and I am somewhat restricted in what I can tell you. I'll put it this way: The system *should* have been restricted to 10-20 batch sessions with maybe a couple of DBAs hacking away. In practice the box sat there with little work to do, so they allowed a large but select bunch of users to bang away at the machine. That box was shifting many terabytes *a day* in it's normal line of duty in *addition* to running 300 interactive sessions (mostly read-only so actual throughput hit on the batch was minimal).

A -10 style box and OS developed using the same level of tech. would have curled up and died quite frankly. The I/O subsystem would have killed it stone dead *before* the OS/SMP issues came into play. The EV6 was not fantastic at doing OS/SMP stuff, but it was still good enough to punt several Terabytes around in a day *in addition* to keeping 200-300 interactive users busy.

IIRC large EV6 boxes didn't share buses, memory was in effect multi-ported. EV7s took that to a new level, which AMD has also achieved with it's Opteron line. AFAIK -10s used shared bus, shared memory = big bottleneck that gets tighter with every CPU you add.


recent post in comp.arch about 75-80 user mixed-mode (interactive, batch) on 360/67 w/cp67
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?

and another post that if the number of users had scaled linearly with cpu and real storage ... then the 3081 would have had a couple thousand users ... as it was, a lot of 3081s ran with 300-400 active users. this was the theme that over a period of years the relative system disk thruput had declined by a factor of ten (or better) ... referenced in an otherwise long recent post
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?

aka while typical real storage and cpu increased by a factor of 50 or more ... disk thruput increased by a factor of only 4-5 (resulting in a relative system disk thruput decline by a factor of ten) ... and the typical number of interactive users increased roughly in proportion to the disk thruput ... not the cpu or real storage resources.

in the 60s as an undergraduate i talked about dynamic adaptive scheduling being able to dynamically adapt to varying system thruput bottlenecks.

lots of past posts about long term trends appearing to be disk thruput constrained:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Thu, 04 Aug 2005 11:46:01 -0600
K Williams writes:
It wasn't just a marketing failure. AT&T didn't understand computers. They had telecommunications blinders on WRT reliability; small faults are allowed, but massive ones aren't. The computer biz (at least at that time) was the opposite, small errors were death, but massive outages weren't a disaster. IBM, likewise, didn't understand the telecommunications biz and mucked up its many attempts to compete in that arena.

in the late 80s there were some similar comments about the background of the people that were responsible for OSI work in ISO (not so much a telecommunications background ... more specifically the references were to a voice-grade point-to-point copper wire background). also, in this period, govts, including the US federal govt, were mandating that the internet and tcp/ip be eliminated and replaced with OSI.

furthermore, ISO had a rule that ISO and ISO-chartered standards bodies couldn't do standards work on protocols that violated OSI.

there was a proposal for a high-speed protocol (hsp)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

to x3s3.3 ... the ISO-chartered ansi standards body that was responsible for protocols at levels 3&4 in OSI.

it was rejected for violating osi:

1) hsp would go directly from the level 4/5 interface (transport) to the LAN interface. this bypassed the level 3/4 interface (network), violating osi.

2) hsp would support internetworking protocol ... internetworking didn't exist in osi ... and supporting something that didn't exist in OSI was a violation of OSI.

3) hsp would go directly from the level 4/5 interface (transport) to the LAN interface. the LAN interface sits somewhere in the middle of OSI level 3 ... violating OSI ... and therefore a protocol that supported the LAN interface (which violated OSI) was also a violation of OSI.

=======================

a lot of the mainframe world got wrapped around SNA ... basically a host-centric, centralized computer communication operation (not a networking infrastructure) oriented towards managing large numbers of dumb terminals.

my wife did work on real networking architecture ... in the same time-frame that SNA was coming into being ... but got huge push-back from the SNA forces. couple minor references:
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5

she then got con'ed into going to POK to be in charge of loosely-coupled architecture ... where she put together the Peer-Coupled Shared Data architecture. it did see a little take-up with ims hot-standby ... but really didn't come into its own until parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata

although we did leverage peer-coupled infrastructure when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we also had sort of a funny anecdote from the high-speed data transport work (the project name was specifically chosen to differentiate it from communication)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we were contracting for some high-speed gear from the far east. the friday before a business trip to the far east (to look at some of the hardware) ... one of the people from SNA announced a new online discussion group on networking ... and provided the following definitions as the basis for some of the discussion:


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday, this was on the wall of a conference room outside tokyo:

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

minor past references:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2003m.html#59 SR 15,15
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

I don't know if it was true or not ... but one of our vendors (in the US) claimed that somebody from AT&T had come by later and asked them to build a duplicate set of whatever gear they were building for HSDT.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Fri, 05 Aug 2005 10:50:56 -0600
Morten Reistad writes:
All complex systems have massive failures. You may reduce the impact of them by duplicating; but once you get past the low-hanging fruit you are at slightly below three nines uptime.

there are fail-safe or fail-graceful issues ... how resilient is the overall environment in the face of failures.

supposedly software (& human mistakes) took over from purely hardware failures starting sometime in the early 80s.

also, when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
... we coined the terms disaster survivability and geographic survivability (to differentiate from disaster recovery) ... as an increasing number of failures were environmentally related (in part, as other types of failures declined).
https://www.garlic.com/~lynn/submain.html#available

for failure-resilient programming ... i've often commented that it can be ten times the work and 4-10 times the lines of code ... compared to the simple straight-line application implementation. failure-resilient programming is also often characteristic of online or real-time service. when we were working on the payment gateway for what has since become e-commerce ... the failure-resilient programming was much more work than the straight-line application.

another part of failure resiliency is to have significant amounts of redundant resources ... in telco this was frequently lumped into provisioning. one of the things that can happen is that businesses go on cost-cutting measures that start to eliminate some amount of the redundancy.

Complexity and/or paring back on redundancy can make the environment more vulnerable to general outages and systemic failures.

then there is the story of a large financial institution that had a computerized system for optimally moving money around ... located in a large 50-story office building. their claim was that the system earned more in 24hrs than the annual rent on the whole bldg plus the aggregate annual salaries of everybody working in the bldg (or conversely, they would lose that much money every 24hrs if the system was down).

misc. past references to 4-10 times for failure-resilient programming:
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Fri, 05 Aug 2005 23:35:37 -0600
Andrew Swallow writes:
NATO used to be a big user of OSI. TCP/IP levels 1 and 2 were designed for wire links and hit big problems when used over poor quality radio links.

part of that was oriented towards 60s technology w/o any FEC whatsoever. osi kept things much more in lock step ... in part because of the relatively high error rates ... if you were considering 60s state of the art.

however, in '85, i joked that I could get better quality technology out of a $300 cdrom player than I could get from $6k (lit1) computer modems at the time (fec, optical drivers, etc).

cyclotomics in berkeley was doing a lot with reed-solomon ecc ... for modems as well as having contributed to the cdrom standard.

there was a hack for really poor quality FM radio links ... do 15/16ths reed-solomon encoding with selective retransmit ... however, instead of retransmitting the original packet (when the reed-solomon ecc was insufficient) ... transmit the 1/2-rate viterbi fec of the original packet (in the 15/16ths reed-solomon channel). if the number of selective retransmits got too high ... just switch to constantly transmitting the original packet along with its 1/2-rate viterbi (within the reed-solomon channel) ... until the bit-error-rate dropped to an acceptable threshold (somewhat trading off bandwidth against latency).
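
a rough sketch of just the retransmission policy being described (the encoder calls are stand-in stubs ... not real reed-solomon/viterbi implementations ... and all names, sizes and thresholds are made up): send packets in the 15/16ths reed-solomon channel; on a selective-retransmit request, send the 1/2-rate (viterbi-decodable) encoding of the packet instead of the original; once the retransmit rate is high, send every packet together with its 1/2-rate encoding until the error rate drops.

#include <stdio.h>
#include <string.h>

#define MAXPKT 256

/* stand-in stubs -- a real implementation would do reed-solomon outer
   coding and rate-1/2 convolutional (viterbi-decodable) inner coding */
static void rs_15_16_send(const unsigned char *data, size_t len)
{
    printf("send %zu bytes in 15/16 reed-solomon channel\n", len);
}
static size_t conv_half_rate(const unsigned char *in, size_t len,
                             unsigned char *out)
{
    memcpy(out, in, len);          /* placeholder: real coding doubles the size */
    memcpy(out + len, in, len);
    return 2 * len;
}

static int retransmits_recently;   /* crude link-quality measure; a real
                                      sender would decay/age this counter */
#define BAD_LINK_THRESHOLD 8

/* first transmission of a packet */
void send_packet(const unsigned char *pkt, size_t len)
{
    unsigned char buf[2 * MAXPKT];
    rs_15_16_send(pkt, len);
    if (retransmits_recently > BAD_LINK_THRESHOLD) {
        /* link is bad: proactively send the 1/2-rate encoding as well,
           trading bandwidth for latency until the error rate drops */
        size_t n = conv_half_rate(pkt, len, buf);
        rs_15_16_send(buf, n);
    }
}

/* selective-retransmit request for a packet the receiver couldn't recover */
void handle_nak(const unsigned char *pkt, size_t len)
{
    unsigned char buf[2 * MAXPKT];
    retransmits_recently++;
    /* retransmit the 1/2-rate encoding rather than the original packet */
    size_t n = conv_half_rate(pkt, len, buf);
    rs_15_16_send(buf, n);
}

int main(void)
{
    unsigned char pkt[MAXPKT] = "example frame";
    send_packet(pkt, 64);
    handle_nak(pkt, 64);
    return 0;
}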

cyclotomics was later bought by kodak.

minor reference from the past ....

Date: Mon, 14 Apr 86 14:39:23 pst
From: Katie Mac Millen <macmilk@umunhum>
Subject: upcoming talk in ee380 - Wednesday, April 16
To: 380-dist@umunhum

EE380---Computer Systems Colloquium

Title: The Impact of Error-control on Systems Design

Speaker: Dr. Robert E. Peile
From: Cyclotomics

Time: 4:15 p.m. on Wednesday
Place: Terman Auditorium

Abstract

The need for correct, reliable data is increasing in proportion to the growth and dependence on data communications. However, data is transmitted or recorded over media that is becoming LESS reliable, for reasons that include frequency congestion, deliberate radio interference, increased data-rates or increased recording densities. Sophisticated error control offers a way of satisfying these competing demands. Moreover, the effect of error control can fundamentally change the trade-offs facing a system designer. This lecture presents the different types of error control that can be used and discusses their relative merits. The effect of well-designed error control on a system is illustrated with several examples including Optical Card technology, Packet-Switched data and Space-Telemetry data.

================


... snip ... top of post, old email index

misc. past mention of reed-solomon
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#210 AES cyphers leak information like sieves
https://www.garlic.com/~lynn/2000c.html#38 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001b.html#80 Disks size growing while disk count shrinking = bad performance
https://www.garlic.com/~lynn/2001k.html#71 Encryption + Error Correction
https://www.garlic.com/~lynn/2002e.html#53 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002p.html#53 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003e.html#27 shirts
https://www.garlic.com/~lynn/2003h.html#3 Calculations involing very large decimals
https://www.garlic.com/~lynn/2003j.html#73 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004h.html#11 Mainframes (etc.)
https://www.garlic.com/~lynn/2004o.html#43 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005k.html#25 The 8008

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Sat, 06 Aug 2005 09:44:37 -0600
floyd@apaflo.com (Floyd L. Davidson) writes:
The simple difference was probably Vinton Cerf!

He became part of MCI, and they injected money and some smarts into GCI, and it has paid off.


note that MCI was part of the group that won the NSFNET backbone bid/RFP ... i.e. one of the things changing from technology to real internet operation. one of the characteristics of OSI and other technology was that they were targeted at relatively homogeneous networks.

a defining characteristic of the internetworking protocol was having gateways and being able to internetwork networks. Of course you had to have the technology supporting the internetworking of networks ... but you also had to create the actual operational implementation and deployment of internetworking of networks.

misc. NSFNET backbone RFP & award announcement
https://www.garlic.com/~lynn/2002k.html#12 nsfnet backbone RFP announcement
https://www.garlic.com/~lynn/2000e.html#10 announcements of nsfnet backbone award

we were operating a high-speed backbone at the time, but were not allowed to bid on the nsfnet backbone RFP. NSF did do a review of what we had operational. there was some letter that stated that what we had in operation was at least five years ahead of all the NSFNET backbone RFP responses (to build something new).
https://www.garlic.com/~lynn/internet.htm#0

lots of other NSFNET backbone refs:
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#19 Comrade Ronda vs. the Capitalist Netmongers
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#73 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002k.html#12 old/long NSFNET ref
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002o.html#41 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#11 Networks separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003g.html#36 netscape firebird contraversy
https://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Sat, 06 Aug 2005 10:16:50 -0600
oh, and the somewhat obligatory posting about the internal network being larger than arpanet/internet from just about the beginning until sometime in the mid-80s
https://www.garlic.com/~lynn/subnetwork.html#internalnet

at the time of the cut-over from the homogeneous arpanet/internet host protocol to the internetworking protocol, arpanet/internet had around 250 nodes.

by comparison, a little later the same year, the internal network passed 1000 nodes.
https://www.garlic.com/~lynn/internet.htm#22

small sample of node update announcements from '83:


•                     Date Sent     7-29-83
•  Node              Connected   Machine   DIV    WHERE            OPERATOR
•                     To/How      Type                              NUMBER
+ BPLVM             * PKMFGVM/9  4341/VM   DSD  Brooklyn, N.Y.     8-868-2166
+ FRKVMPF1          * STFFE1/9   4341/VM   CSD  Franklin Lakes, NJ 8-731-3500
+ FUJVM1            * FDLVM1/4   3033/VM   AFE  Fujisawa, Japan
+ GBGMVSFE          * GBGVMFE3/C GUEST/MVS FED  Gaithersburg, Md.  8-372-5808
+ LAGVM5            * LAGM3/9    168/VM    CPD  La Gaude, France
+ LAGVM7            * LAGM1/9    3032/VM   CPD  La Gaude, France
+ MVDVM1            * BUEVM1/2   4341/VM   AFE  Montevideo,Uruguay 98-90-17
+ RALVMK            * RALVMA/C   4341/VM   CPD  Raleigh, N.C.      8-441-7281
+ RALVMM            * RALVS6/C   4341/VM   CPD  Raleigh, N.C.      8-227-4570
+ RALVMP            * RALVM2/C   3081/VM   CPD  Raleigh, N.C.      8-442-3763
+ RCHVM1PD          * RCHVM1/C   4341/VM   SPD  Rochester, Mn.
+ RCHVM2            * RCHVM1/C   4341/VM   SPD  Rochester, Mn.
+ RCHVM3            * RCHVM1/C   4341/VM   SPD  Rochester, Mn.
+ SJMMVS16          * SNJMAS2/9  4341/MVS  GPD  San Jose, Ca.      8-294-5103
+ SJMMVS17          * SNJMAS1/9  4341/MVS  GPD  San Jose, Ca.      8-276-6657
+ TUCVMJ            * TUCVMI/5   148/VM    GPD  Tucson, Arizona    8-562-7100
+ TUCVMN1           * TUCVM2/C   4341/VM   GPD  Tucson, Arizona    8-562-6074
+ UITECVM1          * UITHON2/9  4341/VM   EMEA Uithoorn, Netherlands

•                    Date Sent    12-15-83
•  Node              Connected   Machine   DIV    WHERE            OPERATOR
•                     To/How      Type                              NUMBER
+ ADMVM2            * ADMVM1/9   4341/VM   EMEA Amsterdam, Neth.   20-5133034
+ BOEVMN            * BOEVM1/9   4361/VM   SPD  Boeblingen, Ger. 49-7031-16-3578
+ BRMVM1            * MTLVM1/9   4341/VM   AFE  Bromont, Canada    514-874-7871
+ DUBVM2            * RESPOND/4  3158/VM   EMEA Dublin, Ireland    785344 x4324
+ ENDVMAS3          * ENDCMPVM/C 3081/VM   GTD  Endicott, N.Y.     8-252-2676
+ KGNVME            * KGNVMN/C   3081/VM   DSD  IBM Kingston, N.Y.
+ KISTAINS          * KISTAVM/9  4341/VM   EMEA Stockholm, Sweden
+ MDRVM2            * MDRVM1/9   3031/VM   EMEA Madrid, Spain      34-1-4314000
+ MSPVMIC3          * MSPVMIC1/9 4341/VM   FED  Minneapolis, Minn. 8-653-2247
+ POKCAD1           * PKDPDVM/9  4341/VM   NAD  Poughkeepsie, N.Y. 8-253-6398
+ POKCAD2           * POKCAD1/C  4341/VM   NAD  Poughkeepsie, N.Y. 8-253-6398
+ SJEMAS5           * SNJMAS3/S  4341/MVS  GPD  San Jose, Ca.      8-276-6595
+ TOKVMSI2          * FDLVM1/9   3031/VM   AFE  Tokyo, Japan

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Sat, 06 Aug 2005 15:09:15 -0600
"Lars Poulsen (impulse news)" writes:
This is the foundation of the big divide between circuit thinking and packet thinking. Circuit thinkers are not stupid, they were just raised differently. Ironically, the packet switches deep in the Internet backbone have adopted many of the design elements of the circuit switching fabric; with switching paths hardware controlled by tables that are occasionally refreshed from computations on s specialized routing computation server board in the multiprocessor complex.

when we were talking to various people about the nsfnet deployments ... we didn't spend a lot of time going into details about telco provisioning issues ... since they really weren't interested in real industrial strength service and all that gorp
https://www.garlic.com/~lynn/2005n.html#28 Data communications over telegraph circuits

however, when we were asked to work with this small client/server startup that wanted to do payments on their server (frequently now referred to as e-commerce):
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we felt compelled to really get into those issues.

one of the characteristics was that they were taking message formats from a circuit-based world and deploying them in packets ... and assumed that would be sufficient. however, the standard packet-deployed environments had extremely primitive diagnostic and operational facilities for industrial strength service.

as part of deploying the payment services and payment gateway ... we went back to ground zero ... and itemized/catalogued the recovery and diagnostic processes used in a standard telco-provisioned, industrial strength circuit-based infrastructure (little things like being expected to do end-to-end, first-level problem diagnosis and determination within five minutes). for every one of those operational characteristics that didn't exist in the normal packet-based environment of the period ... we had to invent an equivalent process and document its use (as part of an expected industrial strength service environment).

very early in the process there was some initial testing and some sort of outage, resulting in a call to the trouble desk and the opening of a trouble ticket. then followed 3hrs of investigation, finally closing the trouble ticket as NTF (no trouble found) ... which was totally unacceptable by standard operating procedures.

slightly related recent reference to the activity
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits

the issue wasn't that tcp/ip didn't have useful characteristics ... it was that its feet had never been held to the fire to evolve real business critical operational characteristics.

we even gave a talk about how tcp/ip of the period lacked real industrial strength operational characteristics (and some of the compensating processes that we had to invent) to a combined ISI/USC graduate student seminar along with the RFC editor organization (physically located at ISI).
http://rfc-editor.org/
http://www.isi.edu/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Sun, 07 Aug 2005 08:01:13 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
And it fails on the other test. When manipulating such things, you almost never want the image you are working on demand-paged, with a 10 mS delay for each page fetched. You want images swapped in and out in their entirety - whether by 'normal' I/O or the system.

this is one of the places where tss/360 fell apart on the 360/67. the os/360 model (with real memory) was to partition large applications (like compilers) into efficient execution and transfer units ... and do large block i/o from one unit/phase to the next.

tss/360 went to an extreme (simplistic) memory-mapped paradigm ... where such large applications were laid out in a single large memory-mapped file ... and then left to demand page fault (4k page at a time).

cms had done a pretty much straight port of a lot of the os/360 compilers into the cms environment ... keeping their phased/unit organization. the architecture was layered ... so a lot of cms still treated its virtual address space as if it were real memory ... with cp67 underneath providing the virtualization & virtual memory function. the cms filesystem did do the incremental block allocation that is characteristic of a lot of more familiar systems ... rather than the large contiguous space allocation that is characteristic of the os/360 genre of systems. however, the program load functions still attempted to do whole program phase/unit loading ... attempting single I/Os with multi-block transfers.

in the early '68 time-frame (original cp67 installed at the university before i had done any performance improvements) we ran a simulated fortran edit, compile, and execute workload with tss/360 and cp67/cms on the same hardware. tss/360 had multi-second interactive response with four emulated terminal users running the workload. cp67/cms had approx. 1-second interactive response with 30 emulated terminal users running the same workload.

when i implemented a memory-mapped layer for the cms filesystem,
https://www.garlic.com/~lynn/submain.html#mmap

i also added a function that would attempt contiguous record allocation ... the standard cms filesystem had left it to incremental record-at-a-time allocation (which meant multi-record loads depended on whether the original incremental allocation happened to be sequential). my initial page mapping would also attempt to do a single-I/O multi-page fetch (if there were sufficient resources available) rather than simply leaving it to demand paging.
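
for flavor, a modern POSIX analogy to the same idea (this is not the cms or cp67 code ... the file name is just a placeholder): map the whole image and then ask for the range up front instead of taking one demand fault per 4k page.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "module.bin";   /* placeholder file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* map the whole program/module image, as the cms memory-map layer did */
    void *img = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (img == MAP_FAILED) { perror("mmap"); return 1; }

    /* instead of pure 4k-at-a-time demand paging, request the whole range
       up front -- the analogue of the single multi-page fetch I/O (only
       effective if the underlying blocks were allocated contiguously) */
    if (madvise(img, st.st_size, MADV_WILLNEED) < 0)
        perror("madvise");

    /* ... touch/execute the image here ... */

    munmap(img, st.st_size);
    close(fd);
    return 0;
}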

this month, share, the customer user group organization, is having its 50th anniv. meeting in boston.
http://www.share.org/

at the fall '68 Atlantic City share meeting, i had presented the results of some of the performance work i had done on cp67/cms during the spring and summer of 68; a couple of old references to that presentation:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

which happened to somewhat focus on os/360 student fortran batch workload running in a virtual machine under cp67.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Why? (Was: US Military Dead during Iraq War

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why? (Was: US Military Dead during Iraq War
Newsgroups: alt.folklore.computers
Date: Sun, 07 Aug 2005 10:34:09 -0600
jmfbahciv@aol.com wrote:
When they counted "resources", did it include when somebody had to clear up a mess? Perhaps this is simply my working style, but if I have to clean up a mess, I need a datetime stamp to I know where to start cleaning up. Do I need to redo every command since last year, last month, last week, yesterday, two minutes before the system crashed?

This is extremely important if a system is servicing _human_ users. I think batch is a completely different circumstance but you would know more about that than I would.


ref:
https://www.garlic.com/~lynn/2005n.html#20 Why? (Was: US Military Dead during Iraq War

resources in the particular context were the resources to do the implementation .... aka instead of a standard unix port to the real machine interface ... they implemented a VRM abstraction ... which cost resources to build. they then did the unix port to the VRM abstraction ... which turned out to take significantly more resources than doing a real machine port (contrary to original claims).

the other resource context was the machine resources (and instruction pathlengths) to execute the journal filesystem function. the original implementation relied on the database memory function available on the 801 ... so that (128 byte) blocks of memory could be identified as changed/non-changed for the purpose of logging. this avoided having to put calls to the log function for every metadata change as it occurred. supposedly the database memory was to minimize the resources needed to modify existing applications (for logging) as well as the resources (in terms of instruction pathlengths) to execute the logging function. the palo alto work seemed to demonstrate that 1) modifying the unix filesystem code to insert explicit logging function calls wasn't a major effort (which they had to do to port to processors w/o the 801 database memory function) and 2) the explicit calls actually resulted in shorter pathlength when the function was executing.

with respect to filesystem logging ... the logs have tended to have a max size on the order of 4-8 mbytes. keep all changed metadata records in memory until after the related changes have been written to the log, then allow the metadata records to migrate to their disk positions (which can be scattered all over the disk surface, requiring a large number of independent disk writes). when a set of changes has consistently migrated to disk ... mark those log entries as available. if the log fills before that happens, stop things until log entries become available for re-use.

restart/recovery then tends to have a max redo of 4-8 mbytes worth of changes applied to the related metadata records. recovery reads the full log ... determines which entries may correspond to metadata that might not have made it to disk (still resident in memory at the time the system failed/shutdown), reads those metadata records from disk and applies the logged changes.

the log methodology basically allows the metadata records to be written asynchronously, possibly in non-consistent sequence (say disk arm optimization order rather than metadata consistency order) ... while preserving the appearance of consistent ordering (in case of system shutdown/failure in the middle of incompletely writing some set of metadata records to disk).

the number of active records in the log impacts the restart/recovery time ... but too severely minimizing the number of active records can increase the slow-down points when metadata synchronization to disk has to occur (can possibly get into a situation where there is some trade-off between recovery/restart time and general thruput). An objective is to not noticeably slow down general thruput while keeping restart/recovery time nearly instantaneous.
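a minimal sketch of the above write-ahead-log style of metadata handling (names, sizes and structures here are my own illustrative assumptions, not the actual journaled filesystem code):

LOG_LIMIT = 4 * 1024 * 1024            # assumed 4 mbyte redo log

class MetadataLog:
    def __init__(self):
        self.log = []                  # (record_id, new_bytes) entries
        self.log_bytes = 0
        self.dirty = {}                # record_id -> latest in-memory copy
        self.disk = {}                 # stand-in for the scattered disk records

    def update(self, record_id, new_bytes):
        # the change goes to the log first; the record itself stays in memory
        if self.log_bytes + len(new_bytes) > LOG_LIMIT:
            self.checkpoint()          # stall until log space is reclaimed
        self.log.append((record_id, new_bytes))
        self.log_bytes += len(new_bytes)
        self.dirty[record_id] = new_bytes

    def checkpoint(self):
        # migrate dirty records to their disk positions (any order is fine),
        # then mark the corresponding log entries as available for re-use
        for record_id, data in self.dirty.items():
            self.disk[record_id] = data
        self.dirty.clear()
        self.log.clear()
        self.log_bytes = 0

    def recover(self):
        # restart: redo logged changes that may not have reached disk
        for record_id, data in self.log:
            self.disk[record_id] = data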

X509 digital certificate for offline solution

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Sun, 07 Aug 2005 10:11:27 -0700
Gaja wrote:
Hi, I have a situation. We need to encrypt database connection information like server name, user id and password. I was thinking of X509 digital certificate and cryptography. Unfortunately this should work as a offline solution. I have used digital certificates that was verified by certificate authority. Being an offline solution I dont know if the X509 digital certificate can be used by cryptography services to encrypt. Can anyone help me please, or suggest me a good way to encrypt user id and passwords in a offline solution.

Thanks,


the whole point of digital certificates was to provide authentication in an offline environment where the two parties had no previous communication. it is usually possible to trivially demonstrate that in any kind of online environment and/or environment where the parties have existing, ongoing relationships ... digital certificates are redundant and superfluous.

the core technology is asymmetric key cryptography ... a pair of keys, what one key encodes, the other (of the key pair) decodes. This is in contrast to symmetric key cryptography where the same key both encrypts and decrypts.

a business process has been defined called public key ... where one of an asymmetric key pair is labeled as public and made freely available, the other of the key pair is labeled private, kept confidential and never divulged.

another business process has been defined called digital signature. basically, calculate the hash of a message or document and encode it with a private key. The message and digital signature are transmitted together. The recipient recalculates the hash of the message, decodes the digital signature (with the corresponding public key) and compares the two hashes. If they are the same, then the recipient can infer:

1) the message hasn't been altered (since the original digital signature)
2) something you have authentication, aka the digital signature originated from somebody that had access and use of the corresponding private key

in the case where parties have an ongoing relationship ... they can have the other party's public key stored in their trusted public key repository. in online situations, parties may have online access to a trusted repository of public keys. In either case, there is no need to have digital certificates.
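a minimal sketch of digital signature generation/verification against a locally trusted public key repository, with no certificate involved (uses python's "cryptography" package purely for illustration; the rsa/pkcs#1 v1.5 choices and the names are my assumptions):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# originator: private key kept confidential, public key freely distributed
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"some message or document"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: trusted public key repository ... no digital certificate involved
trusted_keys = {"lynn": public_key}          # hypothetical repository

def verify(sender, msg, sig):
    key = trusted_keys[sender]               # repository lookup replaces any certificate
    try:
        key.verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(verify("lynn", message, signature))    # True unless message/signature altered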

Digital certificates are a business process to somewhat address the "letters of credit" authentication paradigm from the sailing ship days. They were targeted at the offline email environment of the early 80s, where the local (electronic) post-office was dialed, email exchanged, the connection hung up ... and the recipient then might be dealing with first time communication from a complete stranger.

The solution (from the "sailing ship" days before electronic, online communication) was to have individuals extend their trusted public key repositories to include the public keys of Certification Authorities. Strangers now take information, their public key (and hopefully proof they have access to the corresponding private key) to a certification authority. The certification authority validates the supplied information and creates a digital certificate that contains the person's supplied information and their public key ... which is digitally signed by the Certification Authority.

Now in the case of first-time communication from a complete stranger, they digitally sign the message, and transmit the message, their digital signature and the digital certificate (they acquired from the certification authority).

The recipient, instead of directly verifying the stranger's digital signature (using a public key from their trusted public key repository), verifies the certification authority's digital signature (on the supplied digital certificate) using the certification authority's public key (which has hopefully been preloaded into the recipient's trusted public key repository). If the certification authority's digital signature verifies (on the supplied digital certificate), then the recipient can take the originator's public key (from the supplied digital certificate) and use it to verify the digital signature on the message.

The objective is that the recipient can trust the certification authority's digital signature and also finds the ancillary information in the certificate (and certified by the certification authority) meaningful and useful.

One of the problems from the early 90s was what information might be useful in an x.509 identity certificate. In most cases, the certification authorities wouldn't know which future recipients a stranger might wish to communicate with ... and therefore what information they might find useful. There was some direction to grossly overload the x.509 identity certificates with enormous amounts of personal information.

By the mid-90s, some institutions were starting to realize that x.509 identity certificates, grossly overloaded with personal information, represented significant privacy and liability issues. As a result, you saw some institutions retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

basically containing some sort of database lookup value and a public key. The recipients were also the certification authority; they basically registered an individual's public key in an existing online institutional relationship management infrastructure and issued a digital certificate containing an index to the corresponding entry in the relationship management infrastructure.

the issue was that it became trivial to demonstrate that relying-party-only certificates were redundant and superfluous, in part because they violated the basic original premise justifying digital certificates ... first time communication between two strangers. more fundamentally, it was trivial to show that the recipient (relying-party) already had all the information contained in the digital certificate ... and therefore the digital certificate provided no information that the recipient didn't already have.

another aspect was that in some of the financial transaction scenarios, the relying-party-only certificates also represented enormous payload bloat (in addition to being redundant and superfluous). The overhead for a typical relying-party-only certificate of the period could be 4k-12k bytes. The typical financial transaction message size is on the order of 60-80 bytes. The payload bloat of appending relying-party-only certificates to such messages was on the order of one hundred times (for something completely redundant and superfluous).

To address this issue there was some activity to define a compressed digital certificate (hoping to get into the 300-600 byte range). In this particular situation, instead of showing that it wasn't necessary to append a redundant and superfluous relying-party-only digital certificate ... it was possible to demonstrate that a perfectly valid compression technique was to eliminate all fields from the digital certificate that were known to be in the possession of the recipient. Since it could be demonstrated that all fields in a digital certificate were already possessed by the relying party, it was possible to compress such a digital certificate to zero bytes. So rather than demonstrating that it wasn't necessary to append a redundant and superfluous digital certificate ... it was also possible to demonstrate that a zero-byte digital certificate could be appended.

Data communications over telegraph circuits

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data communications over telegraph circuits
Newsgroups: alt.folklore.computers
Date: Mon, 08 Aug 2005 06:41:39 -0600
ref internet industrial strength data processing
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#30 Data communications over telegraph circuits

a trivial example: one of the first major sites for the original payment gateway
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

was a sports oriented operation ... which did some national advertisements on sunday afternoon football.

they had a major isp that frequently scheduled down-time on sunday afternoons for various maintenance activities (aka no telco provisioning). if you were a commercial high-speed customer of theirs, you would get email bulletins about which cities were having sunday afternoon outages for maintenance activities (30min-3hr periods).

so for the payment gateway ... we could mandate that the server code attempt connections using multiple a-record support.
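a minimal sketch of what "multiple a-record support" means on the client side (the host name is hypothetical; real code would add the timeout/retry policy appropriate to the application):

import socket

def connect_any(host, port, timeout=10):
    # resolve the name to all advertised addresses and try each in turn,
    # so a single ISP/path outage doesn't take the service down
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(addr)
            return s                     # first address that answers wins
        except OSError as err:
            last_err = err               # fall through to the next a-record
    raise last_err

# usage (hypothetical name): sock = connect_any("payment-gateway.example.com", 443)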

the payment gateway had an HA configuration ... some of it was background from having done the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

and multi-homed support with multiple connections into different ISPs at different major points into the internet (... one of the isps had a (48-volt) router co-located at a telco facility). this was also in the period that the internet routing policy transitioned to hierarchical.

we attempted to get the browser group to also support multiple a-record for connections ... but for whatever reason ... that took another year.

in any case, this being one of the first major e-commerce sites, it turned out their ISP had a couple of sunday afternoon (scheduled) outages ... when they were expecting major activity (because of their national sunday afternoon football advertisements).

giving talks during the period about the internet not being oriented towards industrial strength data processing ... i was somewhat surprised at the number of people in the audience who would break in and parrot something (from some univ. course?) about tcp being a reliable protocol.

past mentions of multiple a-record support
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2003c.html#8 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#12 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#24 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#25 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment
https://www.garlic.com/~lynn/2003.html#30 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003.html#33 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2004k.html#32 Frontiernet insists on being my firewall
https://www.garlic.com/~lynn/2004o.html#53 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005f.html#55 What is the "name" of a system?
https://www.garlic.com/~lynn/2005g.html#21 Protocol stack - disadvantages (revision)
https://www.garlic.com/~lynn/2005n.html#5 Wildcard SSL Certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PART 3. Why it seems difficult to make an OOO VAX competitive

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PART 3. Why it seems difficult to make an OOO VAX competitive
(really long)
Newsgroups: comp.arch
Date: Mon, 08 Aug 2005 14:09:21 -0600
"Eric P." writes:
Or they could move the absolute addresses out of the code into a table and use PC relative indirect @ddd(PC) to access global variables.

This would probably be a better approach because the loader would only need to patch a contiguous set of entries in a table, thus modifying only a few code pages, rather than addresses scattered all over the code pages as each patch generates a CopyOnWrite copy. This would be much better for shared code libraries.


so the os/360 real memory heritage was to patch addresses all over the place ... using the "RLD" entry convention output by the compilers and assemblers. then later, with virtual memory and "shared code" pages ... any image with all the "RLD" entries pre-swizzled ... resulted in an address dependent location in the virtual memory.

in early 70s when i started doing page-mapped stuff for the cms filesystem
https://www.garlic.com/~lynn/submain.html#mmap

I also did a lot of work reworking applications to run as page mapped virtual memory images in (read-only) shared segments. i had to go thru all sorts of contortions trying to create address independent images ... when the standard was that images were address dependent. lots of past posts about hacking the standard RLD/adcon (relocatable directory & address constant) convention:
https://www.garlic.com/~lynn/submain.html#adcon

at least that was one thing that tss/360 had done ... creating a convention for address independent images ... while the os/360 genre had a convention of address dependent images (which has pretty much carried forth to this day ... and you have to resort to all sorts of hacks to try and compensate).
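a toy illustration of the difference (not the actual os/360 RLD format or tss/360 conventions; offsets and values are made up) ... swizzling scattered adcons dirties pages all through the image, while gathering them into a table confines the patching to the table:

def relocate_scattered(image, rld_offsets, load_base):
    # rld_offsets: positions of address constants embedded in the image;
    # every one gets swizzled for the load address, dirtying those pages
    for off in rld_offsets:
        image[off] += load_base
    return image

def relocate_via_table(adcon_table, load_base):
    # only the separate adcon table gets patched; code pages stay shared
    return [entry + load_base for entry in adcon_table]

image = [0, 100, 0, 200, 0, 300]         # pretend words; 100/200/300 are adcons
print(relocate_scattered(list(image), [1, 3, 5], 0x20000))
print(relocate_via_table([100, 200, 300], 0x20000))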

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch
Date: Tue, 09 Aug 2005 12:51:02 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
As far as swapping is concerned, paging gave a bandwidth of about 250 KB/sec, assuming no scheduling delays. Swapping gave about 1 MB/sec times the number of disks used. The current figures are more like 650 KB/sec and 40 MB/sec times the number of disks. Yes, you really CAN swap a gigabyte in less time than you can page in a megabyte!

the internal network was larger than the arpanet/internet from just about the beginning until possibly mid-85. I've contended this was in part because the internal network had gateway type capability in every node from early on ... while the internet didn't get it until the great conversion to internetworking protocol on 1/1/83. at the time of the switch-over, the arpanet/internet had about 250 nodes ... while the internal network passed 1000 nodes a little later that year.
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the issues ... was that the primary network capability ran in a virtual address space, was (internally) multi-threaded, but used the operating system "spool" file facility for storage at intermediate nodes. the system's "spool" file facility emulated unit record devices externally ... but internally used the paging subsystem ... and so had some operational characteristics of demand page operation.

this had the unfortunate characteristic of serializing the networking virtual address space during disk/page (4k byte) transfers. On a large, heavily loaded system with lots of other use of the spooling system ... any virtual address space would be lucky to get 5-10 4k transfers/sec (20kbytes to 40kbytes/sec).

arpanet required dedicated IMPs and dedicated 56kbit links for a minimum network connection. another factor in the ease of the internal network growth was that you could bring up the network service on an existing system and just install lines and (usually faster) modems (than what was being used for dial-up terminals).

the gateway like functionality and shared use of existing resources made it much easier to expand the internal network (vis-a-vis the arpanet environment of the 70s) ... but also resulted in network server bottlenecking with the use of shared resources.

the high-speed data transport project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

was putting in multiple T1 links (originally clear channel; moving to 193rd-bit stuff required some new hardware) and higher-speed links. A system might have multiple full-duplex T1s (150kbyte/direction, 300kbyte/link full duplex) and could see severe bottlenecks with the possibly 20kbyte to 40kbyte/sec aggregate thruput threshold. This was over and above using HYPERchannel for local campus connectivity (50mbit/sec ... aggregate, potentially several mbytes/sec for a machine operating as an intermediate node).

so for HSDT ... I had to expand the intermediate node "spool" interface (sort of cribbed from demand paging) ... to support asynchronous operation, multi-block reads & writes, read-ahead, write-behind as well as contiguous allocation (with some sleight of hand that smacks of log structured filesystem operation ... minimizing changes to filesystem-level metadata). i also liberally scavenged the page-mapped interface that I had done for the cms filesystem.
https://www.garlic.com/~lynn/submain.html#mmap
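
a minimal sketch of the read-ahead idea (nothing like the actual spool code; the block source and queue depth are illustrative assumptions) ... a producer thread keeps a bounded queue of prefetched blocks so the consumer never stalls behind a single synchronous 4k transfer:

import queue
import threading

def reader(fetch_block, nblocks, out_q):
    # producer: keeps fetching ahead of the consumer
    for n in range(nblocks):
        out_q.put(fetch_block(n))        # blocks only when the queue is full
    out_q.put(None)                      # end-of-stream marker

def consume(fetch_block, nblocks, depth=8):
    q = queue.Queue(maxsize=depth)       # read-ahead depth
    threading.Thread(target=reader, args=(fetch_block, nblocks, q),
                     daemon=True).start()
    while True:
        block = q.get()
        if block is None:
            break
        yield block                      # fetch and processing overlap

# usage with a stand-in 4k block fetcher:
for blk in consume(lambda n: b"\0" * 4096, nblocks=32):
    pass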

asynchronous page fault operation for multi-thread applications ... isn't a new idea ... one of the universities had modified cp67 & did some low-level os/360 MVT code to support reflecting that a page fault was in process for an MVT thread ... allowing the MVT supervisor the opportunity to switch threads. a form of this was officially released as part of VS/1 handshaking ... in conjunction with the release of the ECPS microcode virtual machine support (the objective was potentially to convert all 138/148 installations ... and later 4331/4341s ... to a virtual machine environment).

slightly related recent networking posts
https://www.garlic.com/~lynn/2005n.html#25 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#27 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#28 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#30 Data communications over telegraph circuits

misc. posts on spool file system enhancements for hsdt:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#3 History of C
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What was new&important in computer architecture 10 years ago ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was new&important in computer architecture 10 years ago ?
Newsgroups: comp.arch
Date: Wed, 10 Aug 2005 10:41:47 -0600
"kus@free.net" writes:
I'm looking what was especially "new & important" in computer architecture (I don't say about "pure researches") 10 years ago, in 1995.

1) Introducing of Pentium Pro : it was begin of wide using of x86 chips for serious server applications (and x86 Beowulf clusters later).

2) Clear win of 64-bit RISC chips: to Alpha 21164 it was added SGI/MIPS R8K/90 Mhz (75 Mhz started at 1994?) and PA-8000

3) Start of win of RISC SMP-servers over vector mini-supercomputers (Cray J90) or even over large vector systems (as Cray C90 - taking into account price/performance ratio).

This win was based on SMPs like SGI Power Challenge, DEC 8200/8400 and Convex SPP servers (btw, ccNUMA in large configurations; Sequent, if I'm correct, proposed ccNUMA on quad Pentium Pro boards).


.. slightly earlier ...

here is a reposting of a summary of the Spang Robinson Report on Supercomputing and Parallel Processing (from jan. 88)
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?

another post on some market segment overviews from 92
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers

note that SCI was picking up in the early 90s ... with a 64-port memory interface ... convex did the exemplar with 64 two-way HP-RISC processor boards and both sequent and DG did 64 four-way intel processor boards.
http://www.scizzl.com/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What was new&important in computer architecture 10 years ago ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was new&important in computer architecture 10 years ago ?
Newsgroups: comp.arch
Date: Wed, 10 Aug 2005 12:04:42 -0600
"Del Cecchi" writes:
NUMA and clusters. That is about the time frame they became commercial. Sysplex. etc.

my wife had laid the groundwork for sysplex with the Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

when she was con'ed into going to pok to be in charge of loosely-coupled architecture.

one of the reasons that we did ha/cmp cluster work
https://www.garlic.com/~lynn/subtopic.html#hacmp
and
https://www.garlic.com/~lynn/95.html#13

was because the chipset we had to work with had no provisions for memory consistency (modulo the 4-way RSC, which had a hack where storage areas designated as "shared" wouldn't be cached).

however, we had worked on some number of smp implementations over the previous couple of decades
https://www.garlic.com/~lynn/subtopic.html#smp
and
https://www.garlic.com/~lynn/submain.html#bounce

including later getting to spend some amount of time at sci meetings.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Uploading to Asimov

Refed: **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: comp.sys.apple2,comp.emulators.apple2
Subject: Re: Uploading to Asimov
Date: Wed, 10 Aug 2005 19:06:31 -0700
Paul Schlyter wrote:
A digital signature can be forged as easily as an MD5 hash. To really guard against deliberate tampering with the data, you need more than that: you need a PKI, i.e. a digital certificate signed by a CA you trust.

you need to be able to validate a digital signature with a public key that you trust. this can be totally independent of whether you involve digital certificates and/or certification authorities.

the technology is asymmetric key cryptography; what one key (of a key-pair) encodes, the other key decodes. this is to differentiate from symmetric key cryptography where the same key both encrypts and decrypts.

there is a business process called public key ... where one key (of a key pair) is designated as public and freely distributed. the other key (of a key-pair) is designated as private, kept confidential and is never divulged.

there is a business process called digital signature ... where the hash of some message or document is calculated and then encoded with a private key. the recipient can then recalculate the hash, decode the digital signature with the (corresponding) public key and compare the two hashes. If they are the same, then the recipient can conclude:

1) the contents have not been modified since the digital signature was generated
2) something you have authentication, aka the originator had access and use of the corresponding private key.

the normal public key infrastructures have recipients keeping trusted public keys in their own trusted public key repository and/or accessing an online trusted public key repository.

digital certificates were somewhat targeted at the offline email environment of the early 80s (a paradigm somewhat analogous to the "letters of credit" from the sailing ship days). the recipient dials their local (electronic) post office, exchanges email, hangs up and then is potentially confronted with first time email from a total stranger.

Digital certificates provided a means for a recipient to determine something about the originating stranger in first time communication. Institutions called certification authorities were defined that generated digital certificates. An applicant provided some information to the certification authority along with their public key. The certification authority validated the information and the public key and loaded them into a message called a digital certificate ... which was digitally signed by the certification authority.

In the first time communication with a stranger scenario ... the sender generates a message, digitally signs the message ... and then transmits a combination of 1) the message, 2) their digital signature, and 3) their digital certificate.

The recipient now first processes the certification authority's *message* ... aka the digitally signed digital certificate. They hopefully have a copy of the certification authority's public key available to them to validate the digital signature (on the digital certificate ... just like they would do normal digital signature validation using public keys from their trusted public key repository). If the digital signature on the digital certificate (message) validates ... then the recipient can retrieve the (sender's) public key from the digital certificate and validate the digital signature on the transmitted message.
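a sketch of that two-step verification, with a simple dict standing in for a real x.509 certificate (the encoding and field names are my assumptions; uses python's "cryptography" package purely for illustration):

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def sign(priv, data):
    return priv.sign(data, padding.PKCS1v15(), hashes.SHA256())

def check(pub, sig, data):
    pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())   # raises if invalid

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# "certificate": the sender's public key plus the CA's digital signature over it
cert_body = sender_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
certificate = {"body": cert_body, "ca_sig": sign(ca_key, cert_body)}

# recipient's trusted public key repository holds only the CA's public key
trusted_repository = {"the-ca": ca_key.public_key()}

message = b"first-time note from a stranger"
msg_sig = sign(sender_key, message)

# step 1: validate the CA's digital signature on the certificate
check(trusted_repository["the-ca"], certificate["ca_sig"], certificate["body"])
# step 2: take the sender's public key out of the certificate and
#         validate the digital signature on the message itself
sender_pub = serialization.load_der_public_key(certificate["body"])
check(sender_pub, msg_sig, message)
print("both digital signatures verified")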

There is a fundamental issue that the whole certification authority and digital certificate infrastructure is based on the recipient having access to a trusted public key repository for validating digital signatures (which can be the direct digital signatures or they can be the certification authority's digital signature on the stylized messages called digital certificates).

Fundamentally the infrastructure is based on the ability to validate digital signatures on messages w/o the requirement for having certification authorities and digital certificates. Once you have an infrastructure of trusted public key repositories and the ability to directly validate digital signatures ... then you can incrementally add the stylized digitally signed messages (called digital certificates, created by certification authorities) which can be validated using the underlying infrastructure for directly validating digital signatures (in this case the certification authority's digital signatures on digital certificates ... to address the first time communication between strangers scenario).

In the early 90s, certification authorities were looking at generating x.509 identity certificates ... where the information included in the digital certificate was identity information. In many cases, the certification authorities couldn't completely predict what set of identity information a recipient (also called relying party) might be interested in. As a result, there was somewhat of a direction to grossly overload an x.509 identity certificate with enormous amounts of personal information.

By the mid 90s, some institutions were realizing that x.509 identity certificates, grossly overloaded with enormous amounts of personal information, represented significant privacy and liability issues. These institutions somewhat regressed to something called a relying-party-only certificate ... basically containing some sort of database index and a public key.
https://www.garlic.com/~lynn/subpubkey.html#rpo

However, it became trivial to demonstrate that relying-party-only certificates were redundant and superfluous. If an institution already has a long established relationship management infrastructure that they use as a repository of information about the parties they deal with .... then they can include the party's public key in the same repository.

By its nature, a relying-party-only certificate implies that the recipient, rather than obtaining the information about the originating party from the digital certificate, instead obtains it from their long established relationship management infrastructure (the digital certificate only contains a pointer to the entry in the relationship management infrastructure). However, in any sort of business process, the actual digitally signed message will have also indicated who the originator is ... allowing the same entry to be accessed w/o the use of the digital certificate. Furthermore, any digital signature on the actual message can be validated with the public key registered in the relationship management infrastructure.

The corollary to such digital certificates being redundant and superfluous is that their design point is first time communication between two strangers (the letters of credit paradigm from the sailing ship days) where the recipient has no other recourse for information about the sender (either online or offline). By definition, in the relying-party scenario ... the recipient already has an established relationship with the originator ... invalidating the fundamental design point originally established as the purpose of digital certificates.

In any case, the fundamental building block for public keys and digital signatures ... is the recipient has access to a repository of trusted public keys. In the direct scenario, the recipient has the originator's public key and directly validates the digital signature on the message. In the scenario to address the first time communication between strangers ... the recipient's trusted public key repository includes some number of certification authority public keys ... for use in validating the certification authority digital signatures on the stylized messages called digital certificates.

You might be a mainframer if... :-) V3.8

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: You might be a mainframer if... :-) V3.8
Newsgroups: bit.listserv.ibm-main
Date: Wed, 10 Aug 2005 20:22:14 -0600
this is part of a presentation that i gave as an undergraduate at the Atlantic City share meeting in fall 68
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

I had carefully reworked MFT stage2 sysgen so that 1) I could run it in the production jobstream and 2) the dataset and pds member statements were carefully ordered to improve disk arm seek operation. The careful ordering of datasets and members optimizing disk arm seek operation improved elapsed time thruput for a typical university workload by approx. a factor of three.
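a made-up illustration of why the ordering mattered (cylinder numbers and reference pattern are invented) ... the expected arm travel between consecutive references drops sharply when frequently referenced datasets/members sit on adjacent cylinders:

import random

random.seed(1)
adjacent  = [10, 11, 12, 13]        # frequently referenced data placed together
scattered = [10, 150, 300, 399]     # the same data spread across the pack

def expected_travel(cyls, trials=10000):
    # average cylinder-to-cylinder distance between consecutive references
    total = 0
    prev = random.choice(cyls)
    for _ in range(trials):
        nxt = random.choice(cyls)
        total += abs(nxt - prev)
        prev = nxt
    return total / trials

print("adjacent placement :", expected_travel(adjacent))
print("scattered placement:", expected_travel(scattered))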

Some people from the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had come out and installed a copy of cp67 in jan. of 68 ... and during the late spring and summer of '68, i rewrote significant parts of the cp67 kernel.

A univ MFT jobstream took 322 secs to run on the bare machine. Running the same MFT jobstream in a CP67 virtual machine, the original CP67 overhead added 534 seconds of cpu processing. By fall of 68, I had gotten that CP67 overhead down to 113 cpu seconds.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moz 1.8 performance dramatically improved

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moz 1.8 performance dramatically improved
Newsgroups: netscape.public.mozilla.performance,netscape.public.mozilla.general,netscape.public.mozilla.browser
Date: Thu, 11 Aug 2005 10:40:49 -0600
Robert Lindsay writes:
I must say, the performance of Moz 1.8 Beta 1 over Moz 1.7.2 is really dramatic. It was getting to the point where Moz 1.7x was truly unusable. Is the Moz 1.8 trunk identical to the FF 1.1 trunk? I have no idea what they have done to make this app work better but it will really put a smile on your face. If you are having problems with Moz 1.7x slowness, bloatedness, freezing, refusing to respond, Windows offering to kill Moz "program is not responding", Moz slowing all other apps to a crawl, painfully slow redraws with Moz, and all of these problems seemingly having no possible solution, pls consider upgrading to Moz 1.8x. Although it is a beta, I have had no problems at all with it so far; in fact, it works much better than Moz 1.7x.

i've been using a tab folder bookmark of 125 or so URLs ...

with 1.8 (and seamonkey, up thru yesterday's nightly build) I've been able to click on the bookmark folder and then possibly click on another 100-150 URLs (250-300 tabs total) before I start seeing significant slowdown ... aka clicking on an additional background tab URL will start locking up the foreground window for several seconds ... also periodically getting a popup complaining about some script being hung/non-responding.

with 1.7 (thru 1.7.11) i can start seeing the slowdown after only 10-15 additional background tab URLs ... after bringing up the tab folder bookmark (130-140 background tabs total).

also the initial browser "lockup" processing the 125 background tabs at once is shorter with 1.8 than 1.7.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moz 1.8 performance dramatically improved

From: Anne & Lynn Wheeler <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.browser
Subject: Re: Moz 1.8 performance dramatically improved
Date: Thu, 11 Aug 2005 10:40:49 -0600
dell dimension 8300, 4gbytes ram (although linux monitor only says there are 3.5gbytes), single intel 3.4ghz processor with multi-threading enabled ... running (two-processor) fedora fc3 smp kernel (although mozilla is only using single thread), sata raid controller with two 250gbyte sata drives.

before i do the bookmark thing and get to 300 tabs ... monitor will typically say that there are 250-400 mbytes ram in use (& no swap). after hitting 300 tabs ... it may hit 500-700mbytes (and no swap) ... aka adds about 1mbyte/tab. i really have to strain to get it to hit 1gbyte of in-use ram.

the ram use numbers slightly changed when fc3 went from a 2.6.11 kernel to a 2.6.12 kernel.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X509 digital certificate for offline solution

Refed: **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Fri, 12 Aug 2005 04:56:14 -0700
Valery Pryamikov wrote:
Lynn,

Cool to see you in microsoft.public.dotnet.security :D

The fact that PKI failed to be universal solution to secure communication, doesn't diminish fact that PKI has certain quite tangible value for certain scenarios, like supporting Enterprise wide key-management and secure communications related tasks. There are plenty situation when non-PKI key-management infrastructures and protocols simply can't be used due to inherent limitations -- e.g. Kerberos fails to work with occasionally connected systems (e.g. connected once per week or so).


there are lots of scenarios where it is possible to have institutional management of public keys for authentication purposes ... especially as part of an overall online operational infrastructure.

PKI, certification authorities, and digital certificates ... were specifically designed to address the offline, disconnected, first-time communication between strangers where the recipient (relying-party) had no other recourse for information regarding the party they were dealing with.

The original pk-init draft for kerberos just had public keys registered in lieu of passwords ... and performed digital signature verification (in lieu of password comparison). It wasn't until later that they also allowed an entity totally unknown to kerberos to present a digital certificate as part of authenticating to kerberos.
https://www.garlic.com/~lynn/subpubkey.html#kerberos

Possibly one of the most prevalent internet oriented authentication functions is radius (i.e. in use by the majority of ISPs for authenticating their clients when they connect). This has been primarily a password based infrastructure ... however there have been radius enhancements where public keys are registered in lieu of passwords and digital signature verification is done in lieu of password checking
https://www.garlic.com/~lynn/subpubkey.html#radius

in both cases it is possible to integrate public key authentication into the permission and overall relationship management infrastructure w/o having to resort to a redundant and superfluous, duplicate relationship management infrastructure represented by PKI, certification authorities, and digital certificates.
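a minimal sketch of registering a public key in an existing account/permission table and verifying a digital signature over a server challenge in lieu of password comparison (the table layout, names and rsa/pkcs#1 v1.5 choices are illustrative assumptions, not any particular kerberos/radius implementation):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# the same relationship-management record that already holds permissions
accounts = {"lynn": {"permissions": ["dialup"], "public_key": user_key.public_key()}}

def issue_challenge():
    return os.urandom(32)                       # fresh nonce per login attempt

def authenticate(userid, challenge, signature):
    entry = accounts.get(userid)
    if entry is None:
        return False
    try:
        entry["public_key"].verify(signature, challenge,
                                   padding.PKCS1v15(), hashes.SHA256())
        return True                             # no certificate consulted
    except InvalidSignature:
        return False

challenge = issue_challenge()
sig = user_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())
print(authenticate("lynn", challenge, sig))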

The basic issue for public keys and digital signatures is straight-forward authentication ... integrated into the overall existing business practices that manage the entity, their access, their permissions, etc.

Typically, PKIs have represented independent infrastructures, independent of overall system operation. However, it is possible to do straight-forward integration of public key and digital signature authentication into existing infrastructures w/o having to resort to redundant and superfluous PKIs, certification authorities, and digital certificates.

A simple sanity check:

if the system infrastructure has some table of entities along with related authorizations and permissions ... that is critical to the operation of the system ... then it is possible to add public key and digital signature verification to that infrastructure w/o having to resort to an independent PKI, certification authority, and digital certificates.

the PKI, certification authority, and digital certificates were targeted at infrastructures that didn't have existing relationship management infrastructures. a sanity test of whether or not a PKI is redundant and superfluous: if the digital certificate provides all the necessary information to the operating infrastructure (all entity information, all permissions, all access control, all authorization) w/o the operating infrastructure needing to reference any other information ... then the digital certificate, certification authority, and PKI aren't redundant and superfluous.

If the operating infrastructure simply uses information in a digital certificate to reference the ("real") repository of entity, permissions, authorizations, and/or access control ... then it is usually trivial to demonstrate that the digital certificate is redundant and superfluous (usually by showing that the public key can also be registered in such a repository). Once the digital certificate is shown to be redundant and superfluous ... it follows that the certification authority is redundant and superfluous and PKI is also redundant and superfluous.

What was new&important in computer architecture 10 years ago ?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was new&important in computer architecture 10 years ago ?
Newsgroups: comp.arch
Date: Fri, 12 Aug 2005 11:24:15 -0600
prep writes:
Clusters had been around for over 10 years then. Both Vaxen with VMS and Tops-20 CFS.

clusters date back to the 60s & 70s.

i worked on one of the largest clusters in the late 70s ... the HONE system (which provided online infrastructure for all the US sales, marketing, and field people ... and was also cloned at a number of places around the world providing worldwide support for sales and marketing)
https://www.garlic.com/~lynn/subtopic.html#hone

but there were lots of others ... the airline control program for the airline res systems starting in the 60s; and in the same timeframe, there were also the custom modified 360s for the FAA air traffic control system.

i remember visiting nasa/houston in the late 60s (i think part of a share meeting held in houston spring of 68). they had five 360/75s in some sort of cluster providing ground support for missions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Anyone know whether VM/370 EDGAR is still available anywhere?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anyone know whether VM/370 EDGAR is still available anywhere?
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l
Date: Fri, 12 Aug 2005 14:12:42 -0600
Peter_Farley@ibm-main.lst (Farley, Peter x23353) writes:
Hi all,

I encountered a reference to the EDGAR editor today, which (it was said) would run under VM/370 R6.

Does anyone know of a source for EDGAR? Or is it lost in the mists of time?

TIA for your help.


The standard line-mode CMS editor was initially modified (for 3270) to still be a command-line operation ... but displaying a full screen's worth of data ... it was then enhanced to support full-screen input.

EDGAR was one of the first truly full-screen (3270) CMS editors ... allowing input, modifications and even commands to occur at various places on the screen.

a human factors war did develop between edgar and other full screen editors. in edgar, a flat text file could be logically thought of as a continuous scroll with the screen showing a window on the scroll. the start of the file was the top and the end of the file was the bottom.

edgar took a computer-centric point of reference and the notion that the up/down commands were with respect to the movement of the scroll in the window ... "up" moving the scroll up (and the "window" towards the end of the file) and "down" moving the scroll down (and the "window" towards the start of the file).

most of the other editors took a human-centric point of reference and the notion that the up/down commands were with respect to the human point of view ... i.e. "up" moved the window towards the top of the file (as if the person was looking up) and "down" moved the window towards the bottom of the file (as if the person was looking down).

note that tymshare was one of the early (cp67/vm370 based) online commercial time-sharing services
https://www.garlic.com/~lynn/submain.html#timeshare

in the 70s they opened up an online facility for SHARE members supporting online discussions. the complete online archive of those discussions can be found at:
http://vm.marist.edu/~vmshare/

some trivial edgar references:
http://vm.marist.edu/~vmshare/read.cgi?fn=1SP00A&ft=PROB&line=401
http://vm.marist.edu/~vmshare/read.cgi?fn=1SP00A&ft=PROB&line=188
http://vm.marist.edu/~vmshare/read.cgi?fn=3278PERF&ft=MEMO&line=1
http://vm.marist.edu/~vmshare/read.cgi?fn=XMASGIFT&ft=MEMO&line=7842
http://vm.marist.edu/~vmshare/read.cgi?fn=XEDFBACK&ft=NOTE&line=1
http://vm.marist.edu/~vmshare/read.cgi?fn=3270USRS&ft=PROB&line=24

following extract from Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist
Also in February, 1976, Release 3 of VM/370 became available, including VMCF and support for 3350s and the new ECPS microcode. Edgar (the ''Display Editing System''), a program product full-screen editor written by Bob Carroll, also came out in 1976. Edgar was the first full-screen editor IBM made available to customers, although customers had previously written and distributed full-screen editors themselves, and Lynn Wheeler and Ed Hendricks had both written full-screen editors for 2250s under CMS-67.

.....

with respect to the above ... i fiddled around a lot as an undergraduate ... recent post to this n.g.
https://www.garlic.com/~lynn/2005n.html#40 You might be a mainframer if ..

The univ. had a 2250m1 (direct channel attach to 360/67). Lincoln Labs had written a graphics support library for cms (targeted at various fortran applications). I hacked the Lincoln Labs CMS 2250 graphics library into the cms editor (I had also rewritten the cms editor syntax from scratch for inclusion into an OS/MVT18/HASP system ... providing cms editor syntax for HASP CRJE terminal support; I also wrote the HASP terminal device drivers).

for some drift ... various postings on ECPS
https://www.garlic.com/~lynn/submain.html#mcode

for some additional topic drift ... internal ibm had the largest such online (commercial?) time-sharing operation called hone
https://www.garlic.com/~lynn/subtopic.html#hone

in the late 70s, they consolidated all the US hone datacenters in cal. (actually not too far from tymshare's datacenter). it was a vm/370 based infrastructure that provided online support for all sales, marketing and field people (probably the largest single system image cluster computing complex of the period ... it was pushing 40k defined logins for US sales, marketing and field). The HONE system was also cloned at multiple datacenters around the world ... providing worldwide sales, marketing and field online dataprocessing services. slightly related recent comment
https://www.garlic.com/~lynn/2005n.html#44 What was new&important in computer architecture 10 years ago
https://www.garlic.com/~lynn/2005n.html#37 What was new&important in computer architecture 10 years ago
https://www.garlic.com/~lynn/2005n.html#38 What was new&important in computer architecture 10 years ago

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

seamonkey default browser on fedora/kde?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: seamonkey default browser on fedora/kde?
Newsgroups: netscape.public.mozilla.browser
Date: Fri, 12 Aug 2005 16:57:17 -0600
fedora has standard release firefox, thunderbird, and mozilla (1.7.11) distributions.

I also have nightly builds of firefox, thunderbird, mozilla suite 1.8 (from june) and seamonkey in /usr/local/ directories.

i'm trying to get a click on a URL in thunderbird to invoke a tab in seamonkey.

in gnome, it creates entries in

~/.gconf/desktop/gnome/url-handlers/

for http there is a file

~/.gconf/desktop/gnome/url-handlers/http/%gconf.xml

that looks like:


<?xml version="1.0"?>
<gconf>
<entry name="command" mtime="1113668123" type="string">
                <stringvalue>mozilla %s</stringvalue>
</entry>
</gconf>

.....

if mozilla isn't running and I click on a URL in thunderbird, it activates (system) mozilla 1.7.11 and the URL is loaded in a window.

if (/usr/local) mozilla 1.8 is already running and I click on a URL in thunderbird, the URL is loaded in the running mozilla.

if i edit the %gconf.xml file and change "mozilla %s" to "/usr/local/mozilla/mozilla %s" there is no effect, even if i terminate thunderbird and restart it. If i log off and log back in (using KDE) ... apparently KDE picks up something from the gnome file and clicking on a URL in thunderbird no longer works. If i change the %gconf.xml back to the original value ... it has no effect until i log off and log back in.
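for what it's worth, the hand edit above could be scripted something like the following (a hypothetical sketch only; as noted, the desktop doesn't seem to pick up the change until the next log off/on, and gconfd may rewrite directly edited files):

import os
import xml.etree.ElementTree as ET

# path from the %gconf.xml example shown earlier in this post
path = os.path.expanduser("~/.gconf/desktop/gnome/url-handlers/http/%gconf.xml")
tree = ET.parse(path)
value = tree.getroot().find(".//stringvalue")   # the handler command string
value.text = "/usr/local/seamonkey/seamonkey %s"
tree.write(path)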

changing the value to seamonkey doesn't work. changing it to /usr/local/seamonkey/seamonkey doesn't work (with appropriate log off/on in between; whether or not seamonkey is already running when i click on the URL in thunderbird).

of course, it is possible to log on with gnome, where there is a menu that provides for changing the %gconf.xml file.

however, there appears to be some sort of undocumented interaction between the thunderbird, kde, and gnome configurations ... in attempting to get thunderbird to bring up a URL in a currently running seamonkey.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Anyone know whether VM/370 EDGAR is still available anywhere?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Anyone know whether VM/370 EDGAR is still available anywhere?
Newsgroups: bit.listserv.ibm-main
Date: Fri, 12 Aug 2005 21:49:47 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
Wasn't HPO an add-on to VM/SP, not to VMF/370? If so, Edgar was well and truly obsolete long before HPO came out.

... previous posting (edgar ref 2/76 lifted from melinda's paper):
https://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?

basically i had done the resource manager (the first charged-for kernel code) in the vm370 mid-release 3 time-frame; it was continued thru vm370 release 4. for vm370 release 5, SEPP was created from my resource manager along with multiple shadow table support and some misc. other high performance stuff (again priced software). there was a less expensive add-on subset called bsepp ($150?/month instead of $1200?/month)

there was no vm370 release 7 ... instead the naming switched to vm/sp release 1 (aka "system product") ... sepp and bsepp were merged back into the base system ... and the whole thing was charged for (as opposed to the base being free with the extra stuff being charged for).

misc. past postings about 6/23/69 unbundling announcement, progression of charging for application software, my resource manager being the first charged for kernel code, etc
https://www.garlic.com/~lynn/submain.html#unbundle

lots of past postings related to resource management ... starting with doing dynamic adaptive scheduling for cp67 as an undergraduate (acquiring the label fairshare scheduling ... because the default dynamic adaptive policy was fairshare).
https://www.garlic.com/~lynn/subtopic.html#fairshare

and some related work on paging algorithms that were also part of the resource manager
https://www.garlic.com/~lynn/subtopic.html#wsclock

one of the issues was establishing the policies for pricing kernel software ... supposedly if the kernel software involved direct hardware support, it was free ... but add-ons ... like better resource management ... could be priced.

however, as part of the vm370 release 3 resource manager ... I had included a lot of code restructuring the kernel for better integrity as well as multiprocessing support ... work that i had done while working on a 5-way project that was never announced/released
https://www.garlic.com/~lynn/submain.html#bounce

when it was decided to ship smp/multiprocessor support as part of the base vm370 release 4 ... there was something of a problem. SMP support was obviously "free" because it was basic hardware support ... however the implementation was dependent on a lot of restructuring code that I had included in the resource manager ... which was priced.

the result was all of the extra code was removed from the vm370 release 4 resource manager and merged into the base (free) kernel. misc. other smp related postings:
https://www.garlic.com/~lynn/subtopic.html#smp

VM/SP and HPO refs from melinda's paper
https://www.leeandmelindavarian.com/Melinda#VMHist
Beginning in the 1980s, as the number of VM installations grew dramatically, we began to see the birth of firms devoted to producing VM systems and applications software. The founders of this ''cottage industry'' were, for the most part, long-time VM gurus from customer shops and IBM, who knew from first-hand experience what function VM needed to make it a complete production system. They set about supplying that function commercially, thus enabling new VM installations to get started with substantially less expertise and initial investment than had been required earlier. At the same time, we started seeing the results of IBM's new commitment to VM. VM System Product Release 1 came out late in 1980. VM/SP1 combined all the [B]SEPP function into the new base and added an amazing collection of new function (amounting to more than 100,000 lines of new code): XEDIT, EXEC 2, IUCV, MIH, SUBCOM, MP support, and more. There can be no question that by releasing XEDIT in 1980, IBM gave CMS a new lease on life. Within no time, programmers and end users were building large, sophisticated applications based entirely on XEDIT, stretching it to its limits and doing things with it that IBM had never envisioned. That they were able to do that was a tribute to XEDIT's author, Xavier de Lamberterie. (If you've ever wondered where the ''X'' in ''XEDIT'' came from, now you know---it was Xavier here.)

...
In October, 1981, IBM announced the System/370 Extended Architecture (XA). At the same time, it announced a rudimentary XA version of VM, the VM/XA Migration Aid. The Migration Aid was based on the VM Tool built for use by the MVS/XA developers. At the time of the XA announcement, the VM/XA Migration Aid clearly was not considered to be ready for customer use; its general availability date was announced for twenty-six months later. On that same day in October, 1981, IBM announced the first three releases of the VM High Performance Option (HPO), a new product that would be shipped as updates to VM/SP to enhance it to support high-end S/370 processors running in 370 mode. All three flavors of VM were to grow and sometimes prosper throughout the 1980s. The details are probably familiar to you all, so I will touch on only a few of the highlights (and lowlights) of that decade.

.....

for some additional drift ... REX(X) was just starting in '79. I wanted to demo the power of REX. VM had a problem determination & dump analysis package written in a large amount of assembler. I asserted that I was going to write a replacement in less than 3 months elapsed time that had ten times the function and ran ten times faster. minor reference:
https://www.garlic.com/~lynn/submain.html#dumprx

as to the large increase in the number of installations ... there have been numerous discussions about the mid-range market exploding in this time frame for both vm370 4341s and vax machines.

some recent postings on the mid-range market explosion in the late 70s and early 80s:
https://www.garlic.com/~lynn/2005n.html#10 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#12 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#16 Code density and performance?

note that the internal network was also growing fast ... it had been larger than the arpanet/internet from just about the beginning until sometime mid-85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

at the great change-over from IMPs & host protocol to internetwork protocol on 1/1/83, the arpanet/internet had approx. 250 nodes. By comparison the internal network was quickly approaching 1000 nodes, which it passed a little later the same year.
https://www.garlic.com/~lynn/internet.htm#22

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Good System Architecture Sites?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Good System Architecture Sites?
Newsgroups: comp.arch
Date: Fri, 12 Aug 2005 23:03:03 -0600
"Del Cecchi" writes:
There is a tremendous amount of computer architecture information online at the IBM publications site and the IBM redbook site. Might not be what you are looking for however.

Actually the origin and evolution of the 360 and follow-ons would make a pretty interesting study. And Lynn Wheeler saw most of it. A lot of the old manuals and stuff are still around.


besides the ibm sites there are sites that are scanning old manuals.
http://www.bitsavers.org/pdf/
http://www.bitsavers.org/pdf/ibm/
http://www.bitsavers.org/pdf/ibm/360/
http://www.bitsavers.org/pdf/ibm/370/

the above has both old 360 and 370 principles of operation ... which give detailed descriptions of instructions and machine operation.

what hasn't shown up is the architecture redbook (no relation to the redbooks published for customer consumption). starting in the late 60s or so, this was done in cms script and was about twice the size of the principles of operation .... script command invocation controlled whether the full architecture redbook was output or just the principles of operation subset was output. the redbook had all sorts of engineering notes, background justification for instructions, various trade-off considerations, etc.
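purely for flavor, a minimal modern sketch of that conditional-output idea ... in python, not cms script, and the ".ifredbook"/".endif" tags and the command-line flag are hypothetical illustrations rather than the actual script control words: a single source carries both the public text and the redbook-only engineering notes, and the invocation selects which version gets produced.

# sketch of conditional document assembly, loosely analogous to one cms
# script source producing either the full architecture redbook or just
# the principles-of-operation subset. the tags used here are made up.
import sys

def render(source_lines, full_redbook=False):
    """emit everything, or only the public subset, depending on the flag."""
    output, in_private = [], False
    for line in source_lines:
        stripped = line.strip()
        if stripped == ".ifredbook":      # start of engineering-notes section
            in_private = True
        elif stripped == ".endif":        # end of that section
            in_private = False
        elif full_redbook or not in_private:
            output.append(line)
    return "\n".join(output)

if __name__ == "__main__":
    sample = [
        "The LOAD instruction places the second operand in the first-operand register.",
        ".ifredbook",
        "engineering note: timing trade-offs and justification discussed here.",
        ".endif",
        "Condition code: unchanged.",
    ]
    # any command-line argument requests the full redbook; default is the subset
    print(render(sample, full_redbook=len(sys.argv) > 1))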

a more recent copy of the principles of operation is the z/Architecture principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/CCONTENTS?SHELF=DZ9ZBK03&DN=SA22-7832-03&DT=20040504121320

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X509 digital certificate for offline solution

From: <lynn@garlic.com>
Newsgroups: microsoft.public.dotnet.security
Subject: Re: X509 digital certificate for offline solution
Date: Sat, 13 Aug 2005 09:42:41 -0700
Valery Pryamikov wrote:
So, my point is that it is very good to argue against standards (and PKI particularly) in academic groups - that will ensure that newer and better standards will be developed later-on. But it is different thing to advice against using standards in programmers groups without providing proven and feasible alternatives, because that could negatively affect security of future software systems.

the issue isn't arguing against standards ... the issue is explaining the fundamental design point of tools ... so if you are finding it difficult to tighten a nut with a hammer ... you understand that the hammer was meant for driving nails ... not tightening nuts.

i've seen numerous situations where somebody espouses PKI as the answer before even knowing what the problem is.

A trivial scenario is that there are lots of standards & actual deployments involving digital signatures w/o certificates; in fact, one can claim that the fundamental underpinnings of PKIs involve layering certificates on top of underlying digital signature standards (that are certificate-less).

If you are using tools ... it helps to understand how the tools fundamentally work. basic digital signature standards (w/o certificates) provide something you have authentication between two parties ... the relying party verifies a digital signature against a public key it has registered, demonstrating that the signer has access to and use of the corresponding private key.

certificates & PKI were layered on top of the underlying digital signature standard ... to address the scenario of first-time communication between strangers.
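purely as an illustrative sketch of that base, certificate-less case (modern python using the pyca/cryptography package ... not any particular product or standard profile): the relying party has directly registered the other party's public key, and a later digital signature verification against that registered key provides the something you have authentication ... no certificate anywhere in the flow.

# minimal sketch of certificate-less digital signature authentication:
# the relying party has directly registered the sender's public key and
# later verifies signatures against that registration.
# uses the pyca/cryptography package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# --- registration (done once, out of band or at account setup) ---
private_key = ed25519.Ed25519PrivateKey.generate()   # stays with the key owner
registered_public_key = private_key.public_key()     # relying party stores this

# --- later: authentication of a message/transaction ---
message = b"transfer 100 units to account 42"
signature = private_key.sign(message)                # "something you have"

try:
    registered_public_key.verify(signature, message)
    print("signature verifies against the registered public key")
except InvalidSignature:
    print("verification failed")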

There have been attempts to expand digital certificate use so that certificates were required for all authentication. This resulted in lots of hype and promise ... but also significant numbers of unsuccessful deployments. I claim that the unsuccessful deployments weren't due to short-comings in the technology ... they came from attempting to use a tool for something that it wasn't intended to be used for. Putting it another way ... there were fundamental business process conflicts in attempting to use a tool for something it was never designed to do (like using a hammer to tighten a nut).

the original question implied a question about the business process applicability of digital signatures in offline situations ... as opposed to the environments they have typically been used in. understanding the nature of the tool can help in understanding its applicability to different situations.

recent related postings from the cryptography mailing list:
https://www.garlic.com/~lynn/aadsm20.htm#29 How much for a DoD X.509 certificate?
https://www.garlic.com/~lynn/aadsm20.htm#30 How much for a DoD X.509 certificate?
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love

APL, J or K?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: APL, J or K?
Newsgroups: comp.databases.theory
Date: Sat, 13 Aug 2005 12:57:47 -0600
"Marshall Spight" writes:
Anyone here have any experience with the APL, J or K programming languages? (Yes, I recognize the redundancy in "APL programming language.") What about with Kdb?

They look fairly interesting, if somewhat thrown together. It doesn't appear that any of them are examples of great design, but the underlying math seems quite interesting.


at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

we took the apl\360 (real memory swapping) monitor from the philly science center ... and ported it to the cms virtual memory environment. we also added semantics for cms\apl to invoke system functions. this last caused some amount of heartburn with the original theorists ... which wasn't resolved until shared variables were introduced to supplant the cms\apl approach for accessing system functions.

a large number of past postings on apl &/or a major deployed application service environment based on apl
https://www.garlic.com/~lynn/subtopic.html#hone

for a little related topic drift ... I later got to do some work on the original relational/sql implementation at sjr ... also done in the cms virtual memory environment
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IPSEC and user vs machine authentication

Refed: **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: muc.lists.netbsd.tech.net
Subject: Re: IPSEC and user vs machine authentication
Date: Sat, 13 Aug 2005 12:43:34 -0700
Daniel Carosone wrote:
Yes; I probably prefer the agent model in general, and it's the only one that will be practical for PKCS#11 smartcards or similar pki tokens, and for password prompts for XAUTH and OTP tokens.

You're absolutely right that this fits very well with Kerberos identity handling, and that's pretty much what I want even without the certificate or tokencode cases.

In the Kerberos model, I see a login by the user to a racoon (kernel) helper service, which then grants the service a delegated authentication for IKE (NFS, SMB) participation on the users behalf via a forwardable or proxiable ticket. In practice this probably requires the user running a program that looks very much like an agent anyway.

(As an aside, such a service/agent is probably a better model for the user's credential cache than dotfiles, if it implements better access control policy protecting the identity from abuse by unauthorised programs. I wish ssh-agent did so.)


kerberos basically provides for trusted information about the entity ... after the entity has first authenticated themselves to kerberos. originally this was via userid/password (where the userid can be used to associate/bind various kinds of permissions, access control, etc).
https://www.garlic.com/~lynn/subpubkey.html#kerberos

kerberos pk-init initially extended this to register public keys in lieu of passwords and use digital signature verification for something you have authentication ... aka the entity has access to and use of the corresponding private key.
https://www.garlic.com/~lynn/subpubkey.html#certless

later, digital certificate support was added to pk-init ... as a means for entity determination in lieu of registering a userid and having a binding of permissions, authorization and access control. however, many digital certificates just include a userid and public key ... and not the actual permission, authorization and access control binding ... making it trivial to demonstrate that the digital certificates are redundant and superfluous compared to the original direct registration of the entity's public key. the other implication of such redundant and superfluous digital certificates is that they don't achieve the original design point for digital certificates ... containing all the information necessary to perform some operation (not only the authentication but also the permissions).

there have been presentations at grid/globus meetings where they've talked about both radius and kerberos as entity management infrastructures ... both radius and kerberos originally having been userid/password implementation paradigms ... aka the userid provides an anchor for binding permissions, authorizations, access control, journalling, tracking, etc ... and the password provides the authentication.

both radius and kerberos have had implementations that did a simple upgrade of the password registration to public key registration and replaced password matching with digital signature verification
https://www.garlic.com/~lynn/subpubkey.html#radius
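a hedged sketch of what that upgrade amounts to (illustrative python only, not radius or kerberos code ... the registry and function names are hypothetical): the userid/password table becomes a userid/public-key table, password matching becomes digital signature verification, and the permission/authorization binding stays anchored on the userid exactly as before.

# illustrative sketch only (not radius/kerberos code): a userid registry
# where password matching has been replaced by public key registration
# plus digital signature verification. permissions stay bound to the userid.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

registry = {}   # userid -> {"public_key": ..., "permissions": [...]}

def register(userid, public_key, permissions):
    """register the entity's public key in lieu of a password."""
    registry[userid] = {"public_key": public_key, "permissions": permissions}

def authenticate(userid, challenge, signature):
    """verify the signature over the challenge against the registered key."""
    entry = registry.get(userid)
    if entry is None:
        return None
    try:
        entry["public_key"].verify(signature, challenge)
    except InvalidSignature:
        return None
    return entry["permissions"]        # authorization still anchored on userid

# demo: the key owner signs a server-issued challenge
key = ed25519.Ed25519PrivateKey.generate()
register("lynn", key.public_key(), ["read", "write"])

challenge = b"nonce-1234"              # would normally be random per attempt
print(authenticate("lynn", challenge, key.sign(challenge)))   # ['read', 'write']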

both radius and kerberos have also had implementations where digital certificates were introduced as the binding paradigm ... where the digital certificates carry the permissions, authorizations, access control, etc.

However, there have also been some number of deployments which retain the entity management characteristics of the original infrastructure and the digital certificates just carry some sort of userid and public key. For these infrastructures it is trivial to demonstrate that the digital certificates are redundant and superfluous compared to just directly registering the public key.

part of this dates back to the early 90s, with x.509 identity certificates and certification authorities somewhat in the dark regarding what information future relying parties might find useful ... and a tendency to just grossly overload such certificates with enormous amounts of personal information.

some number of institutions by the mid-90s were starting to realize that such x.509 identity certificates, grossly overloaded with enormous amounts of personal information, represented significant privacy and liability issues. as a result there was a retrenchment to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

which basically just contained some sort of identifier index into some kind of entity/relationship management repository (containing all the real information) and the public key. however, it has been trivial to demonstrate that such relying-party-only certificates are redundant and superfluous, since the initial contact also provides the index into the entity/relationship management repository (containing authorizations, permissions, access control, the basis for logging/journaling activity, etc). also, since the relying party is also the registration party ... the public key was already registered in some flavor of the institution's entity/relationship management repository.

possibly semantic confusion related to calling such things identity management infrastructures contributes to the problem. for the most part the entity/relationship management infrastructures don't manage identities, they manage permissions. the early days of the cross-domain kerberos protocol in the late 80s & early 90s talked about how it was going to allow a ford motor employee to access GM dataprocessing facilities. it wasn't so much that GM didn't trust Ford to correctly authenticate their own employees ... it was a question of what possible GM permissions you could imagine Ford being allowed to specify.

ARP routing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ARP routing
Newsgroups: comp.protocols.tcp-ip
Date: Sat, 13 Aug 2005 14:06:14 -0600
Andrew - Supernews <andrew+nonews@supernews.com> writes:
In as much as the OSI model corresponds to IP (a correspondence which is not particularly close, but good enough for now), layer 3 (network) corresponds to the IP protocol, so an IP router is a layer-3 device. Your description above therefore fails to draw any distinction :-)

however, that touches some sensitive points.

basically, the OSI model is a 60s/70s scenario with homogeneous networking ... w/o internetworking protocol and gateways (which can be considered a defining characteristic of internetworking). in the late 80s and early 90s, numerous gov'ts (including the US) were mandating that tcp/ip be eliminated and replaced by osi.

this was further aggravated by the ISO standards body ... which had a directive that no standards work could be done for protocols that violated osi.

there was a work item submitted to ansi x3s3.3 (the us iso-chartered standards body responsible for protocols related to osi levels 3 and 4) for a high-speed protocol (HSP). it basically would go from the level 4/5 (transport) interface directly to the mac/lan interface.

this violated osi by

1) going directly from the level 4/5 (transport) interface to the mac/lan interface skips the level 3/4 interface, violating osi

2) the lan/mac interface corresponds to approximately the middle of level 3, networking ... which itself violates OSI. interfacing to the lan/mac interface therefore also violates OSI.

3) HSP would support internetworking ... since internetworking protocol doesn't exist in OSI, supporting internetworking protocol violates OSI.

misc. related past posts:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

the importance of internetworking (& corollary gateways) shouldn't be discounted.

the arpanet during the 70s basically had an osi-like homogeneous networking flavor until the great switch-over to internetworking protocol on 1/1/83 (when it acquired gateway functionality as part of the internetworking architecture characteristics providing the "inter" layer between networks).

I've frequently asserted that the reason that the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the arpanet/internet from just about the start until possibly mid-85 ... was because the nodes in the internal network had a type of gateway functionality from the start (which the internet didn't get until 1/1/83). at about the time of the switch-over to internetworking on 1/1/83, the internet had approximately 250 nodes ... however the internal network was much larger than that ... passing 1000 nodes later that year
https://www.garlic.com/~lynn/internet.htm#22

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/



