List of Archived Posts

2005 Newsgroup Postings (10/13 - 11/05)

MVCIN instruction
Type A ,Type B
Internet today -- what's left for hobbiests
Flat Query
Flat Query
Internet today -- what's left for hobbiests
Flat Query
Performance of zOS guest
Flat Query
Flat Query
NEW USA FFIES Guidance
Type A ,Type B
Flat Query
Internet today -- what's left for hobbiests
Flat Query
Data Encryption Standard Today
Is a Hurricane about to hit IBM ?
winscape?
winscape?
MVCIN instruction
MVCIN instruction
MVCIN instruction
MVCIN instruction
winscape?
What ever happened to Tandem and NonStop OS ?
MVCIN instruction
IEH/IEB/... names?
whats the difference between Encryption and Encoding ? or both are same ?
MVCIN instruction
IBM 3330
Internet today -- what's left for hobbiests
MVCIN instruction
Random Access Tape?
Power5 and Cell, new issue of IBM Journal of R&D
Power5 and Cell, new issue of IBM Journal of R&D
Filemode 7-9?
Filemode 7-9?
Von Neumann machines. The key to space and much else
MVCIN instruction
Filemode 7-9?
Filemode 7-9?
Random Access Tape?
feasibility of certificate based login (PKI) w/o real smart card
P2P Authentication
winscape?
winscape?
Various kinds of System reloads
Gartner: Stop Outsourcing Now
Gartner: Stop Outsourcing Now
phishing web sites using self-signed certs
Various kinds of System reloads
phishing web sites using self-signed certs
TTP and KCM

MVCIN instruction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Thu, 13 Oct 2005 22:03:04 -0600
Newsgroups: bit.listserv.ibm-main
Ed Gould wrote:
I still remember in those days the POPS for each model had the " micro seconds" each instruction took. My memory is sketchy and after the document was read you had X (I don't remember) many microseconds to make a decision as to which bin you could send the document to.

The POPS indicated how many ms each instruction took. Any time a decision was to be made we had to calculate a total for the path length so we didn't miss the window. Its been ages so I don't remember a lot but I don't remember when you decided to send it to bin 0 (example) how long it took for the selection to happen. I am sure it was documented somewhere, I just don't remember as it was 30+ years ago.

Maybe someone with a better memory can pipe up with more explicit detail than I can provide.

I still cannot believe that IBM would come up with a machine like that. That was the only machine that I ever worked on where the timing was so critical.


home page (also lists some of the shadows)
http://www.bitsavers.org/

scanned manuals
http://www.bitsavers.org/pdf/

here are some old 360 manuals that have been scanned
http://www.bitsavers.org/pdf/ibm/360/

instruction timings were in functional characteristics
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

and 360/30 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA24-3231-7_360-30_funcChar.pdf

how 'bout 2540 component description
http://www.bitsavers.org/pdf/ibm/25xx/A21-9033-1_2540_CompDescr.pdf

360 instruction timing
http://www.bitsavers.org/pdf/ibm/360/A22_6825-1_360instrTiming.pdf

360/30 channel characteristics
http://www.bitsavers.org/pdf/ibm/360/fe/2030/A24-3411-1_360-30_chanChar.pdf

Type A ,Type B

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Type A ,Type B
Newsgroups: comp.protocols.tcp-ip
Date: Fri, 14 Oct 2005 08:14:32 -0600
Barry Margolin writes:
That's about the ISO Transport Protocol. Does anyone really use that? It's *not* TCP/IP -- it's a protocol suite that was intended to replace TCP/IP networking, but never caught on widely.

ISO had a few problems ... ISO and ISO-chartered standards organizations had guidelines that only networking standards work conforming to the OSI model could be done. very late 80s, i was somewhat involved in trying to get ANSI x3s3.3 (the US ISO-chartered organization for standards in the area of OSI levels 3&4 ... network & transport) interested in working on HSP (high-speed protocol).

it was turned down ... in part:

1) HSP would go directly from level 4/5 interface to LAN/MAC interface, bypassing the level 3/4 interface (network/transport). this violated OSI model ... and so couldn't be worked on.

2) HSP would support internetworking protocol ... i.e. IP. the OSI model doesn't contain an internetworking layer; supporting IP violated the OSI model and therefore couldn't be worked on.

3) HSP would go directly to the LAN/MAC interface. the LAN/MAC interface corresponds approx. to somewhere in the middle of OSI layer 3 (networking) ... and violated the OSI model; therefore anything supporting LANs also violated the OSI model and couldn't be worked on.

however, some govs (including the US federal gov) had also mandated that tcp/ip networks be eliminated and replaced by ISO/OSI networks.

misc. collected HSP and/or OSI postings
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

for some additional reference ... my rfc index (frames version)
https://www.garlic.com/~lynn/rfcietff.htm

in RFCs listed by section, select Term (term->RFC#)

and in the Acronym fastpath section select possibly "ISO8072", "ISO8073", "ISO8473", "ISO8879", "ISO", "ITU" and/or "ITU-T" (for part of its life, international telecommunication standards were "ITU"), i.e.
International Organization for Standardization (ISO)
3745 3629 3563 3163 2781 2556 2503 2279 2126 2044 2030 1888 1859 1815 1781 1698 1632 1629 1575 1574 1564 1561 1554 1485 1484 1418 1377 1330 1327 1283 1277 1240 1238 1237 1223 1214 1195 1169 1165 1162 1161 1148 1142 1139 1138 1086 1085 1070 1069 1039 1008 1007 1006 995 994 986 983 982 941 926 905 892


clicking on any RFC number brings up that RFC summary in the lower frame. examp:
892 -
ISO Transport Protocol specification [Draft], International Organization for Standardization, 1983/12/01 (82pp) (.txt=158151) (Obsoleted by 905)


in rfc summary, clicking on the ".txt=nnnn" field retrieves the actual RFC. also ...
905
ISO Transport Protocol specification ISO DP 8073, McKenzie A., 1984/04/01 (154pp) (.txt=249214) (Obsoletes 892) (Ref'ed By 1191, 1583, 1700, 1981, 2178, 2328, 2642, 2740, 2896)


obviously, IETF didn't reciprocate ... even tho internetworking protocol violated the OSI model and therefore couldn't be worked on by any ISO organization ... OSI didn't violate any IETF model.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Internet today -- what's left for hobbiests

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Internet today -- what's left for hobbiests
Newsgroups: alt.folklore.computers,alt.cyberpunk
Date: Fri, 14 Oct 2005 09:48:07 -0600
some of it is similar to automobiles ... in the 40s and 50s, there was a large percentage of individuals (with automobiles) doing their own work ... after-market modifications, service, etc. in the 70s & 80s, automobiles became much more of a consumer appliance. even now, you can still find a huge amount of aftermarket automobile work that goes on ... but it may not be as visible ... not because there is so much less ... just that the consumer appliance sector is a lot more visible.

as an aside, the automobile transition from significant consumer effort to much more of a consumer appliance somewhat coincides with the growth of foreign automobile sales (foreign cars were considered much more reliable and higher quality). this uptake of foreign automobiles by consumers ... then led to import quotas. in the 70s, there was a newspaper article (washington post?, i remember running across it on some business trip) regarding the american automobile industry reaction to the quotas. supposedly the quotas were to give the american automobile industry breathing room and additional profits in order to totally remake themselves into more competitive operations. the article claimed that instead, most of the additional profits were going to salaries, bonuses, and dividends ... and therefore should be taxed at 100 percent as unearned profits.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Fri, 14 Oct 2005 10:27:56 -0600
"Mark D Powell" writes:
The term 'flat file' has been in use for more than 20 years. It was generally applied to files accessed in a sequential manner as opposed to being accessed via direct access or ISAM, Indexed Sequential Access Method, files. Over time, especially in the non-mainframe world, the term was usually used as a synonym for a text file.

or no index and/or no (physical) structure ... other than a sequential set of bits (modulo sequential record boundaries).

however, there was some work on things like flat files having an implied structure ... like sorted records ... where entries were found by doing binary searches (taking advantage of the implied sorted record structure and a file system structure that would support reading random records from the file ... in much the same way that disks allow reading random records ... as opposed to tape, which has been strictly sequential).

there were some resources side-tracked from the system/r activity (original relational/sql implementation)
https://www.garlic.com/~lynn/submain.html#systemr

for an internal, online phone book. this was done as a large (sorted record) flat file (over 25 years ago). there was work done comparing binary search to radix search; i.e. rather than treating records as consisting of totally random bits for a binary search ... pro-rate the search probes based on the letter sequence of the search argument, initially assuming a uniform letter frequency distribution. this was further refined by supplying the phone book search program with the actual first-letter frequency distribution of names in the phone book.

binary search requires an avg. number of probes around log2 of the number of records ... i.e. 64k records requires 16 probes. letter frequency radix search reduced that to closer to five probes.

translation to the unix filesystem was done assuming an avg. record size. mainframe filesystems supported the concept of records ... and API semantics that allowed reading random records (both fixed-length records ... which is the simpler case ... as well as variable-length records ... which is a little more difficult). the unix filesystem API basically allows reading random bytes from a file. record structure is a characteristic of implicit in-band data in the file (i.e. null terminated) as opposed to explicit out-of-band information directly supported by the filesystem. As a result, the letter frequency radix search had to perform a little magic simulating random record reads on top of an (unix) API that provided only simple random character reads.
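as a rough illustration of both pieces ... a minimal sketch, assuming fixed-length records with the key in the leading bytes and a precomputed cumulative first-letter distribution (the record length, key layout, and distribution are all hypothetical; the actual phone book program differed in detail):

import os

REC_LEN = 64   # hypothetical fixed record length, key in the leading bytes

def read_record(fd, recno):
    # simulate a mainframe-style random record read on a byte-stream
    # (unix) file: compute the record's byte offset, seek, and read
    os.lseek(fd, recno * REC_LEN, os.SEEK_SET)
    return os.read(fd, REC_LEN).rstrip(b'\0')

def freq_search(fd, nrecs, key, cumfreq):
    # probe pro-rated by first-letter frequency (cumfreq[c] is the
    # fraction of records whose keys sort before letter c); when a
    # probe misses, narrow the interval exactly as binary search does
    lo, hi = 0, nrecs - 1
    while lo <= hi:
        frac = cumfreq.get(key[:1], 0.5)
        probe = min(hi, max(lo, lo + int((hi - lo) * frac)))
        rec = read_record(fd, probe)
        if rec.startswith(key):
            return probe
        if rec < key:
            lo = probe + 1
        else:
            hi = probe - 1
    return None

binary search is just the special case where the probe always lands mid-interval; the pro-rated probe moves the split point toward where the key is expected to sort (a fuller version would re-interpolate within each narrowed interval).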

one might also consider this helped contribute to lots of databases being implemented on unix platforms with raw disk ... instead of thru the filesystem .... since raw disk at least provided the record oriented api semantics (which got quite obfuscated in a unix filesystem environment).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Fri, 14 Oct 2005 10:44:52 -0600
oh yes, part of the transition to explicit dbms index structures basically came when updates & rebuilds of a complete sorted flat file became more expensive than the disk space overhead (and associated processing) for explicit indexes that allowed doing incremental updates on the fly (w/o having to rebuild the complete sorted file after each batch of updates).

... oh, and the phone book letter frequency radix search was actually better than the 16-to-5 probe comparison. the phone book was organized using physical 4k records with an avg. of approx. 50 phone records per physical record. the letter frequency radix search could frequently pick the related physical record on the first probe.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Internet today -- what's left for hobbiests

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Internet today -- what's left for hobbiests
Newsgroups: alt.folklore.computers
Date: Fri, 14 Oct 2005 13:27:08 -0600
et472@FreeNet.Carleton.CA (Michael Black) writes:
The sad thing is that so much of what's new on the internet has become branded. Instead of someone cooking up something, generating an RFC, and it becoming a widespread protocol, things are started by companies and branded from the beginning. I saw an issue of Technology Review from some months ago, and it was dedicated to "community" on the internet, but instead of the old cooperative and communal spaces, they go through Blog Inc, Craigslist, Freecycling, Flicker, yahoo groups, etc. Half the time, it seems like ISPs don't even issue email accounts and webspace, or at least that's what you'd gather from people wanting gmail accounts and some place where they can put their webpages for free.

furthermore, isoc/ietf/rfc latest/current publication standards now allow for rfc authors to retain copyright rights ... and all new RFCs have statements about referring to the appropriate standards document as to copyright rules.

aka, previously the RFC rules specified that the contributor granted unlimited perpetual, non-exclusive ... etc rights to ISOC and then allowed the information to be used in derivative works

previous rules as specified in rfc2026:
10.3.1. All Contributions

By submission of a contribution, each person actually submitting the contribution is deemed to agree to the following terms and conditions on his own behalf, on behalf of the organization (if any) he represents and on behalf of the owners of any proprietary rights in the contribution. Where a submission identifies contributors in addition to the contributor(s) who provide the actual submission, the actual submitter(s) represent that each other named contributor was made aware of and agreed to accept the same terms and conditions on his own behalf, on behalf of any organization he may represent and any known owner of any proprietary rights in the contribution.

1. Some works (e.g. works of the U.S. Government) are not subject to copyright. However, to the extent that the submission is or may be subject to copyright, the contributor, the organization he represents (if any) and the owners of any proprietary rights in the contribution, grant an unlimited perpetual, non-exclusive, royalty-free, world-wide right and license to the ISOC and the IETF under any copyrights in the contribution. This license includes the right to copy, publish and distribute the contribution in any way, and to prepare derivative works that are based on or incorporate all or part of the contribution, the license to such derivative works to be of the same scope as the license of the original contribution.

2. The contributor acknowledges that the ISOC and IETF have no duty to publish or otherwise use or disseminate any contribution.

3. The contributor grants permission to reference the name(s) and address(es) of the contributor(s) and of the organization(s) he represents (if any).


... snip ... and then carried in every RFC
Full Copyright Statement

Copyright (C) The Internet Society (xxxx). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.


... snip ...

in theory, this posting is a derivative work of rfc2026 ... carrying the above extract from rfc2026 ... and is therefore also required to carry the full copyright statement ... as above.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Fri, 14 Oct 2005 13:33:45 -0600
"David Cressey" writes:
Actually if you go back 25 years or more, the term referred to files that not only were processed sequentially, but also that did not contain records within records, or records grouped into record groups.

definitely true of the strong tape heritage enforcing sequential access ... however as files from tape started showing up on disks (or dasd ... direct access storage device) ... which could be randomly accessed .... you did start to see sorted files that were being queried using techniques like binary search.

one of the things driving the transition/migration to more structure was that update & rebuild of the complete file didn't scale. past a certain point the cost of a complete sort & file rebuild was more than the overhead of the infrastructure (indexes and other processing) that allowed for incremental updates w/o having to rebuild the complete file every time.
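a back-of-envelope of that crossover, with purely illustrative unit costs (treating one record rewrite and one index page touch as comparable; real numbers depend on hardware and workload):

# rebuilding a sorted flat file touches all N records on every batch,
# while an index touches roughly log2(N) pages per updated record
N, batch = 1_000_000, 100            # hypothetical file size, update batch
rebuild_cost = N                     # records rewritten per batch
index_cost = batch * N.bit_length()  # ~ batch * log2(N) page touches
print(rebuild_cost, index_cost)      # 1000000 vs 2000 -- indexes win early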

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Performance of zOS guest

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Performance of zOS guest.
Newsgroups: bit.listserv.vmesa-l
Date: Fri, 14 Oct 2005 13:50:58 -0600
Bill Bitner writes:
Yes. The one thing to remember there is INDICATE USER gives a snapshot of running counters. In most cases, I would use two invocations of the INDICATE USER and take deltas.

common folklore is vtime is what the guest would do on the bare machine w/o vm. however, there have been some exceptions ... original vs/1 (and earlier mvt) handshaking basically offloaded some number of functions out of the guest operating system to VM ... not just because it eliminated processing duplication ... but in some cases, the vm kernel was actually significantly more efficient at performing the function than what was implemented by the guest operating system.

later there was vm kernel operation restructuring for tpf & the 3081. at the time tpf (airline control program) didn't have shared-memory multiprocessor support (and the 3083 hadn't yet been added to the 308x line; originally the 308x was never to have a uniprocessor offering). nominally the vm kernel overhead ran serially with guest operation ... so running a 3081 multiprocessor with a single tpf guest resulted in one processor being idle. the vm kernel was restructured to introduce a queueing and signalling mechanism for some number of operations that vm must do on behalf of the virtual machine. in some number of cases, this increased the absolute processing cycles (because of the additional queueing and signalling) ... but in the single-tpf-guest case on a 3081 ... some amount of the vm kernel processing could now go on in parallel and asynchronously on the 2nd processor, allowing the vm kernel to more quickly return control to the guest operating system.

if the customer was already running all processors at max ... the restructuring done in that release ... degraded overall thruput ... but for the single tpf guest case (not having smp support) on a two-processor 3081, it allowed for overlapped processing on the otherwise idle 2nd processor.
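a minimal sketch of that queueing/signalling idea in modern terms (names and framing are mine, not the vm370 implementation): the requester pays some extra cycles to enqueue and signal, but the work can then proceed overlapped on another processor.

import queue, threading

work = queue.Queue()

def helper_processor():
    # stands in for the otherwise-idle 2nd processor, draining queued
    # kernel operations asynchronously
    while True:
        op = work.get()
        if op is None:
            break
        op()
        work.task_done()

threading.Thread(target=helper_processor, daemon=True).start()

def kernel_service(op):
    # extra queue/signal cycles versus doing op() inline ... but control
    # returns to the guest immediately while op runs in parallel
    work.put(op)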

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Fri, 14 Oct 2005 13:57:53 -0600
"David Cressey" writes:
Only if the record's addresses could be computed (or pointed to). In general, the only kinds of unindexed files whose record address was computable were fixed length records. And in general, fixed length records corresponded to flat files.

Sure you can come up with exceptions. But i'm describing the general scenario in which that language gained usage.


lots of files were structured as fixed length records specifically for that reason ... however there were also tricks with variable-blocked file types ... where there was some filesystem out-of-band infrastructure support that minimized having to perform sequential reads to do random query against a variable length record file.
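the computable-address point in a nutshell ... a sketch (the offset table standing in for the filesystem's out-of-band record information is hypothetical):

# fixed-length: a record's byte address is pure arithmetic
def fixed_offset(recno, reclen):
    return recno * reclen

# variable-length: no arithmetic will do; some out-of-band structure
# (here a per-file offset table) has to map record number to address
def variable_offset(recno, offset_table):
    return offset_table[recno]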

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Fri, 14 Oct 2005 14:12:46 -0600
"David Cressey" writes:
Only if the record's addresses could be computed (or pointed to). In general, the only kinds of unindexed files whose record address was computable were fixed length records. And in general, fixed length records corresponded to flat files.

Sure you can come up with exceptions. But i'm describing the general scenario in which that language gained usage.


it was also somewhat the battle that went on in the 70s between the stl physical database people (60s-style) ... and the sjr system/r people ... original relational/sql
https://www.garlic.com/~lynn/submain.html#systemr

i.e. the physical database ... had records linked to other records ... where the linking was done by physical record pointers that were fields that were part of the record data ... these weren't traditional flat files ... but record location semantics were exposed.

one of the points of the system/r effort was to abstract away the record pointers ... by using indexing. the stl people claimed that system/r doubled the physical disk space (for the indexes) along with a significant increase in processing overhead ... associated with all the index processing gorp. the sjr people claimed that system/r eliminated a lot of the human effort that went into administrative and rebuilding efforts associated with the embedded record pointers. I did some of the system/r code ... but i also did some of the code for various other projects ... so I wasn't particularly on one side or another.

the 80s saw 1) big demand increase for online information ... putting pressure on scarce database people resources, 2) significant increase in physical disk space and decrease in price/bit, and 3) large increase in processor memory that could be used for caching indexes.

The disk technology change drastically reduced the perceived cost of the extra index structures ... and the significant processor memory increases allowed significant caching ... which in turn reduced the perceived overhead of index processing.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

NEW USA FFIES Guidance

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Subject: Re: NEW USA FFIES Guidance
Date: Fri, 14 Oct 2005 13:37:52 -0700
To: n3td3v <n3td3v@googlegroups.com>
ref:
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance

somewhat as an aside ... the x.509 identity certificate activity from the late 80s and early 90s wasn't the only standards activity that appeared to get some things confused ... i.e.

1) let's allow for x.509 identity certificates to be grossly overloaded with personal information and then propose that ALL authentication events convert to digital signature with mandated appended x.509 identity certificates ... turning all authentication events (even the most trivial, efficient operations) into heavy-weight identification operations

2) define a non-repudiation bit that takes on quantum-like, time-travel, and mind-reading characteristics.

since the bit was set in the past and applies to all future digital signatures, it needed time-travel. it also needed to be able to mind-read in order to know that the human had read, understood, agreed, approved, and/or authorized what was digitally signed (as distinct from signing purely random bits as part of a simple authentication protocol).

finally, it had to have quantum characteristics ... since there was no proof in any of the protocols as to what digital certificate was actually appended to any digitally signed message ... it was up to the non-repudiation bit in the digital certificate to know when the relying party was using the digital certificate in conjunction with a simple authentication event (not implying human signature) as opposed to a human signature event (implying read, understood, agreed, approved and/or authorized). In any case, the non-repudiation bit was then required to take on the value appropriate for the kind of digital signature it was being used in conjunction with. that also sort of implies that the certification authority's digital signature applied to the digital certificate be able to support quantum-like characteristics ... in support of the quantum-like characteristics of the non-repudiation bit (the value of the bit is not determined until the digital signature that it is being used in conjunction with is established).

misc. collected posts on human & "e" signatures
https://www.garlic.com/~lynn/subpubkey.html#signature

in any case, iso was also having an interesting time with the osi model ... recent posting about iso not allowing standards work on stuff that violated the osi model ... things like internetworking protocol or local area networks.
https://www.garlic.com/~lynn/2005s.html#1 Type A , Type B

further topic drift ... one of the sources for the merged security glossary and taxonomy
https://www.garlic.com/~lynn/index.html#glosnotes

has been
http://www.ffiec.gov/ffiecinfobase/html_pages/gl_01.html

Type A ,Type B

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Type A ,Type B
Newsgroups: comp.protocols.tcp-ip
Date: Fri, 14 Oct 2005 18:53:14 -0600
James Carlson writes:
The real question, I think, is why the original poster is reading this particular RFC.

What problem is he trying to solve that involves RFC 892?


as indicated in the previous post ... somewhat minor nit; rfc892 was a draft from dec. 1983 ... and was obsoleted in apr. 1984 by rfc905. if nothing else, all references should be to rfc905 rather than rfc892.

also ... the notes from 905 would imply that it might be inappropriate to ask any question related to the matter in any sort of tcp/ip forum.

from 905:
ISO Transport Protocol Specification ISO DP 8073

Status of this Memo:

This document is distributed as an RFC for information only. It does not specify a standard for the ARPA-Internet.

Notes:

1) RFC 892 is an older version of the ISO Transport Protocol Specification. Therefore this RFC should be assumed to supercede RFC 892.

2) This document has been prepared by retyping the text of ISO/TC97/SC16/N1576 and then applying proposed editorial corrections contained in ISO/TC97/SC16/N1695. These two documents, taken together, are undergoing voting within ISO as a Draft International Standard (DIS).


... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Sat, 15 Oct 2005 10:37:04 -0600
"David Cressey" writes:
Good Summary.

About the only place I still see the argument between exposed pointers and indexes is... right here in the comp.databases.theory newsgroup, where our resident gadfly is still trying to persuade us to go back to pointers, and start over!

Branching off on a tangent... By collecting all the pointers in indexes, and putting them under control of a subsystem of the DBMS, it becomes possible to move a table (perhaps to another disk), and update all the pointers in the indexes that need it.

By contrast, in the World wide web, there is, in general, no way of knowing how many hyperlinks will be broken if an object is moved from one URL to another, or how to fix them. People seem to be willing to live with this deficiency, but I suspect that their patience will eventually run out.


referencing previous post:
https://www.garlic.com/~lynn/2005s.html#9 Flat Query

i wonder ... do i have an archived post on this topic from last decade
https://www.garlic.com/~lynn/94.html#26 Misc. more on bidirectional links

except this involved a network database where all the links/pointers were implemented as indexes (abstracted pointers into indexes, somewhat analogous to what was done by system/r ... the original implementation was going on concurrently with the system/r implementation on the same system platform) ... which enforced bidirectional "connections", getting referential integrity ... and also addressed the www unidirectional-link issue.
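a toy sketch of the index-mediated bidirectional-link idea (illustrative only ... not that system's implementation): because both directions live in indexes, moving a target can find and fix every referrer ... the property one-way www hyperlinks lack.

from collections import defaultdict

links_from = defaultdict(set)   # source -> set of targets
links_to = defaultdict(set)     # target -> set of sources (reverse index)

def link(src, dst):
    links_from[src].add(dst)
    links_to[dst].add(src)

def move(old, new):
    # referential integrity on move: every referrer is known and fixed
    for src in links_to.pop(old, set()):
        links_from[src].discard(old)
        link(src, new)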

some minor historical regression ...

the html stuff traces back to waterloo's script implementation ... aka cern was a vm/cms shop ... and waterloo's script is a clone of the cms script document formatting command done at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in fact, cms used to stand for cambridge monitor system ... before it was renamed conversational monitor system. gml was invented at the science center in 69 and support added to script command (aka gml is from "G", "M", and "L", the three inventors ... then had to come up with the markup language part):
https://www.garlic.com/~lynn/submain.html#sgml

and system/r
https://www.garlic.com/~lynn/submain.html#systemr

was also a vm/cms implementation.

and of course, hyperlink stuff traces back to Nelson's xanadu
http://www.xanadu.net/

how about: WWW, what went wrong
http://xanadu.com.au/xanadu/6w-paper.html

and Engelbart's nls/augment
http://sloan.stanford.edu/MouseSite/dce-bio.htm

from above ...
In 1977 Tymshare bought the commercial rights to NLS, renamed it AUGMENT, and set it up as a principal line of business in a newly formed Office Automation Division. There the focus switched from R&D to commercialization, and in spite of Engelbart's efforts, the human/organizational work was cut off, including his carefully cultivated user group. In 1984 Tymshare was acquired by McDonnell Douglas Corporation, where Engelbart began working closely with the aerospace components on issues of integrated information system architectures and associated evolutionary strategies (a welcome extension of his work at SRI).

... snip ...

... tymshare was a couple miles up the road from sjr. for the M/D purchase ... i was brought in to do due diligence on gnosis which had been developed by tymshare and was being spun off as keykos
http://www.agorics.com/Library/keykosindex.html
http://cap-lore.com/CapTheory/upenn/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Internet today -- what's left for hobbiests

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Internet today -- what's left for hobbiests
Newsgroups: alt.folklore.computers
Date: Sat, 15 Oct 2005 11:00:28 -0600
a little drift with recent posts that are slightly network-related historical stuff ... or at least web related (with some drift to network database)
https://www.garlic.com/~lynn/2005s.html#12 Flat Query

other posts in the thread
https://www.garlic.com/~lynn/2005s.html#3 Flat Query
https://www.garlic.com/~lynn/2005s.html#4 Flat Query
https://www.garlic.com/~lynn/2005s.html#6 Flat Query
https://www.garlic.com/~lynn/2005s.html#8 Flat Query
https://www.garlic.com/~lynn/2005s.html#9 Flat Query

slightly more internet or at least communication network related (as opposed to network database)
https://www.garlic.com/~lynn/2005s.html#1 Type A ,Type B
https://www.garlic.com/~lynn/2005s.html#10 NEW USA FFIES Guidance
https://www.garlic.com/~lynn/2005s.html#11 Type A ,Type B

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Flat Query

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Flat Query
Newsgroups: comp.databases.theory
Date: Sun, 16 Oct 2005 10:12:13 -0600
"Roy Hann" writes:
If that is so then it is puzzling. Why would anyone think of describing something N-dimensional as being "flat"?

it would make some sense if it was used to refer to a single table that only had a primary index ... in that case, a primary index lookup would be a very similar operation to a query that did a binary search on a sorted (flat) file (and non-indexed searches of a single table would also be similar to operations performed on the records of a flat file).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data Encryption Standard Today

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data Encryption Standard Today
Newsgroups: alt.security
Date: Sun, 16 Oct 2005 11:24:34 -0600
Unruh <unruh-spam@physics.ubc.ca> writes:
No idea what this means. DES was replaced as the standard by AES. DES is still used all over the place (Eg, I think ATMs). Unix-variant password hashing has changed over to an MD5 derived hash (No it is not MD5 anymore than crypt(3) is DES)

an issue was that a brute-force attack on a DES key was shown to be doable on the order of a day with some custom hardware.

there is use of 3des, which involves three des steps with two keys (giving 112-bit instead of 56-bit resistance to brute force attacks; each additional bit effectively doubles the attack effort).
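a quick sketch with the pyca/cryptography package (my choice of library, an assumption ... any DES implementation would do): a 16-byte key is treated as K1||K2, i.e. two-key 3des with K1 reused for the final step; ECB is used here only because it is a single-block demo.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(range(16))                   # hypothetical K1||K2 (two-key 3des)
enc = Cipher(algorithms.TripleDES(key), modes.ECB()).encryptor()
ct = enc.update(b"8 bytes!") + enc.finalize()

# each key bit doubles brute-force effort, so 112-bit vs 56-bit keys:
print(2**(112 - 56))                     # ~7.2e16 times the work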

one of the uses of a single DES key has been DUKPT for transactions which have a lifetime of possibly a couple seconds (i.e. derived unique key per transaction). an attack on a DUKPT key with a lifetime of a couple seconds needs to be done within the window of the transaction lifetime (aka it isn't so much for confidentiality but integrity).
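the flavor of the idea, in a sketch that is NOT the ansi x9.24 dukpt derivation (the real scheme uses a key-serial-number/counter construction with des-based one-way steps): each transaction key is derived one-way from an initial key and a monotonically increasing counter, so recovering one transaction's key exposes no other transaction.

import hashlib, hmac

def transaction_key(initial_key: bytes, counter: int) -> bytes:
    # one-way derivation: the initial key never appears on the wire, and
    # a cracked per-transaction key can't be run backward to recover it
    msg = counter.to_bytes(8, "big")
    return hmac.new(initial_key, msg, hashlib.sha256).digest()[:16]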

some of it is still the cost/benefit ratio for the attacker ... does the possible benefit from the attack justify the effort put into the attack (a situation possibly yielding a couple million in benefit more easily justifies an attack than one that might only yield a couple hundred).

misc. past posts mentioning dukpt
https://www.garlic.com/~lynn/aadsm3.htm#cstech8 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/2003g.html#9 Determining Key Exchange Frequency?
https://www.garlic.com/~lynn/2003g.html#42 What is the best strongest encryption
https://www.garlic.com/~lynn/2003o.html#46 What 'NSA'?
https://www.garlic.com/~lynn/2004c.html#56 Bushwah and shrubbery
https://www.garlic.com/~lynn/2004f.html#9 racf
https://www.garlic.com/~lynn/2005k.html#23 More on garbage
https://www.garlic.com/~lynn/2005l.html#8 derive key from password
https://www.garlic.com/~lynn/aadsm18.htm#53 ATM machine security
https://www.garlic.com/~lynn/aadsm19.htm#36 expanding a password into many keys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is a Hurricane about to hit IBM ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is a Hurricane about to hit IBM ?
Date: Sun, 16 Oct 2005 15:13:33 -0600
Newsgroups: bit.listserv.ibm-main
DASDBill2 writes:
This project paid for a large number of full-time IBM programmers under their Federal Systems Division who were working at this facility near Houston, among whom were the two original developers of HASP, Tom Simpson and Bob Crabtree. HASP evolved into JES2, recently discussed in another thread.

I don't think it is conspiratorial if you try accurately to predict the national economy 6 to 12 months into the future. I think it is better referred to as realism. If your predictions happen to be based on real facts that are unknown to or disbelieved by the masses, then so be it.


my wife did a stint in the jes group reporting to crabtree ... working on architecture ... took a look at how to merge jes3 multi-system operation with jes2 multi-access spool (there was even a period where executive direction was that there would be no new jes2 development ... it would all go into jes3). this was before she got con'ed into going to pok to be in charge of loosely-coupled architecture.
https://www.garlic.com/~lynn/submain.html#shareddata

some past collected hasp-related postings
https://www.garlic.com/~lynn/submain.html#hasp

there was a thread in a totally different n.g. in the mid-90s that was looking at doing economic predictions for 2020 (25 years out). a shorter-term item looked at as part of this was that y2k remediation work was looming with requirements for significant additional resources. however, it happened to correspond to the internet bubble ... which was siphoning off all available resources into high-flying internet jobs. not a lot of people were paying attention that, somewhat as a result, a lot of legacy bread & butter work was going offshore (at least not until much later, after it was already a fait accompli).

i had the misfortune to predict that the company would go into the red ... about the time the corporate committee was predicting world-wide revenues were going to double from $60b to $120b ... and were spending enormous amounts on adding additional manufacturing capacity. i don't think they really understood the shift going on in computing processing to open & commodity priced hardware.

the scenario was somewhat a continuation of the economic analysis that had been behind some of the justification for future system ... some collected fs postings
https://www.garlic.com/~lynn/submain.html#futuresys

... note, i hadn't fared much better with FS ... at the time, i would periodically draw analogies between the FS project and a cult film that had been playing continuously for several years down in central sq (which didn't exactly make friends with the enormous number of people backing FS).

one reference mentioning FS
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

part of the subject was the advent of clone controllers. when i was an undergraduate ... i got involved in a project to reverse engineer the ibm channel interface and build our own controller ... someplace there was a write-up blaming us for the inception of the clone controller business. misc. collected postings on 360 plug-compatible controllers:
https://www.garlic.com/~lynn/submain.html#360pcm

winscape?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: winscape?
Newsgroups: alt.folklore.computers
Date: Sun, 16 Oct 2005 15:42:14 -0600
blmblm@myrealbox.com (blmblm@myrealbox.com) writes:
I'm skeptical. People really didn't know that they were sharing physical resources with many other users? or if they knew, they didn't care because it didn't matter?

Maybe this was true with the systems you know about. It wasn't my experience using "timesharing" on IBM-and-compatible mainframes. (I'm putting that in quotation marks because one of the standard jokes had to do with whether TSO (Time Sharing Option -- the support for interactive users in MVS) was misnamed.)


recent thread on the subject in comp.arch. when we were having arguments about 3274 not providing interactive support ... the 3274 group effectively came back and said that 3274 wasn't designed to provide interactive support ... its design point was for data entry support ... being better than keypunches ... and the TSO people effectively came down on the side of the 3274 ... in effect that TSO was never designed to provide interactive support ... it was designed to provide data entry support.
https://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design

misc. other recent posts mentioning 3274
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005h.html#40 Software for IBM 360/30
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#17 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#20 Intel strikes back with a parallel x86 design

for 360s, supposedly the official time-sharing product was tss/360 for the 360/67 (a 360/65 with the addition of virtual memory support). it was having all sorts of difficulties ... and somewhat as a result, univ. of mich came out with MTS (michigan terminal system) for the 360/67 and the science center did the cp67 virtual machine timesharing system for the 360/67.
https://www.garlic.com/~lynn/subtopic.html#545tech

note that multics was on the 5th floor and the science center was on the 4th floor, and both multics and cp67 have common heritage w/ctss. i don't know of any commercial spin-offs from multics ... but there were two time-sharing service bureaus in the 60s that spun off (using cp67) to provide commercial time-sharing service. a little later, tymshare was also using it to provide commercial time-sharing services ... misc. past postings referencing time-sharing services
https://www.garlic.com/~lynn/submain.html#timeshare

random drift .. recent mention of tymshare, interactive computing and hyperlinks in comp.databases.theory
https://www.garlic.com/~lynn/2005s.html#12 Flat Query

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

winscape?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: winscape?
Newsgroups: alt.folklore.computers
Date: Sun, 16 Oct 2005 15:52:19 -0600
blmblm@myrealbox.com (blmblm@myrealbox.com) writes:
Or in terms of "systems" meaning the total system, hardware plus software. They don't distinguish the individual parts, or know which ones can be ....

Oh. I guess they do know that individual parts of the hardware can be replaced/upgraded/changed. It just doesn't occur to them, I guess, that the software is also something that can be replaced.


i think there was a study in the early 80s showing that the majority of failure causes had shifted from hardware to software and people.

when we were doing ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

we talked to an operation with a five-nines availability requirement that had deployed some fault-tolerant hardware-based system. however, they discovered that once-a-year software maint. downtime would blow nearly a century's worth of downtime budget.
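the arithmetic behind that (the 8-hour window is a made-up example):

year_min = 365.25 * 24 * 60              # minutes in a year
budget = (1 - 0.99999) * year_min        # five-nines: ~5.26 minutes/year
window = 8 * 60                          # one 8-hour maintenance outage
print(budget, window / budget)           # ~5.26 min; ~91 years of budget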

for some slight topic drift
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

MVCIN instruction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Mon, 17 Oct 2005 10:30:26 -0600
Newsgroups: bit.listserv.ibm-main
Doug Fuerst wrote:
Actually, there was no single interface board in the 67, which used 2880/2860/2870 outboard channel units. And those DID have access to the SDBI/SDBO (Storage Data Bus In and Storage Data Bus Out).

sorry for the short hand ... the control unit channel interface board didn't actually talk to the memory bus and hold the memory bus ... the control unit channel interface board told the channel interface to obtain and hold the memory bus and the control unit channel interface board told the channel interface to release the memory bus.

the administrative logic for deciding to obtain, hold, and release the memory bus was in the control unit channel interface board ... the actual obtaining, holding, and releasing of the memory bus was done by the channel interface (under direction of the control unit channel interface board).

the control unit channel interface board told the channel to obtain, hold, and release the memory bus. the bug wasn't in our board itself obtaining and holding the memory bus ... it was in our board's instructions to the channel: it failed to tell the channel to release the memory bus at frequent enuf intervals to allow the high-speed location 80 timer on the 360/67 to update the memory location 80 timer value.

so the next time you create a ccw to read and write disk records .... it isn't the ccw that reads & writes the disk records ... the ccws (channel command words) in the channel program are executed by the channel ... which passes on commands to disk control unit, which in turns passes on commands to the disk drive. the disk drive then instructs specific r/w heads to transfer data.
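a toy model of that delegation chain (everything here is illustrative, not real channel architecture ... although the command codes named in the comments are the classic ckd ones: seek x'07', search-id-equal x'31', tic x'08', read-data x'06'):

channel_program = [
    ("SEEK", "cyl 100, head 5"),          # position the access arm
    ("SEARCH_ID_EQUAL", "record 3"),      # compare record IDs as they rotate by
    ("TIC", "back to SEARCH"),            # loop until the record ID matches
    ("READ_DATA", "into buffer"),         # transfer the record into memory
]

def channel_execute(program):
    # the cpu only built the ccw list; the channel walks it, handing
    # each command down to the control unit, which drives the device
    for op, arg in program:
        print("channel -> control unit -> drive:", op, "(" + arg + ")")

channel_execute(channel_program)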

MVCIN instruction

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Mon, 17 Oct 2005 11:31:18 -0600
Newsgroups: bit.listserv.ibm-main
... for 360/67 multiprocessor ... it even got more complex. smp 360/67 introduced a channel controller that sat between memory, processor, and channels.

the channel controller had switch settings for multiprocessor mode or partitioned modes. in multiprocessor mode ... all processors could address all channels ... as opposed to standard 360 multiprocessors where specific channels were directly associated with a specific processor. it wasn't until 3081 & 370-xa that the capability re-appeared for all processors to address all channels. 360/67 had virtual memory hardware support and both 24-bit and 32-bit virtual memory modes (and more-than-24-bit virtual addressing also didn't re-appear until 370-xa).

from bitsavers ... 360 channel oem manual
http://www.bitsavers.org/pdf/ibm/360/A22-6843-3_360channelOEM.pdf

360-67 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf

which gives some detail on control registers and the channel controller. in a multiprocessor with channel controller ... the control registers gave the sense values of the channel controller switch settings ... aka r/o ... although there was a custom triplex 360/67 built for some gov. project that allowed the processor to change the channel controller switch settings by changing control register values.

360/67 smp also had multiported memory. the 360/67 started out basically as a 360/65 with virtual memory hardware added; these weren't cache machines. high i/o rates would create memory bus contention for a simplex 360/67 processor and could have a noticeable impact on instruction thruput because of the memory bus contention.

the standard simplex 360/67 had memory cycle of 750ns for double word access. if you were running in address translation, the DAT box added an extra 150ns for every double word access (900ns).
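(back-of-envelope: 150ns on top of 750ns is a 20 percent stretch in double-word access time when running translated.)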

for 360/67 smp, the channel controller and multi-ported memory added additional memory bus delay ... but reduced memory bus contention under heavy i/o load. a half-duplex 360/67 (i.e. smp 360/67 partitioned with dedicated memory boxes, channels, and a single processor) could have higher instruction thruput than simplex 360/67 under heavy I/O load (because of the reduced memory bus contention).

some basic timings, in microseconds (assuming real mode, w/o the DAT box), from the func. char. manual


instruction     format  67-1    67-2
add             rr      0.65    0.69
add             rx      1.4     1.63
balr            rr      1.2     1.24
bal             rx      1.2     1.43
compare         rx      1.4     1.63
multiply        rx      4.8     5.03

MVCIN instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Mon, 17 Oct 2005 18:00:08 -0600
Newsgroups: bit.listserv.ibm-main
Patrick O'Keefe wrote:
There was at least one model of 1403 (with a print bar rather than a chain). There may have been other models, too. And yes, 2514s were the principal tape drives. I don't remember what disks it used. But use of some of the same peripherals doesn't make it part of the same family of processors.

there was the 1443 printer with a print bar that went back and forth (sometimes you found a 1443 and a 2501 card reader in remote student submission/output areas)

pending availability of the 360/67 (a 360/65 with a virtual memory dat box) ... the science center tried to get a 360/50 to modify with virtual memory hardware.
https://www.garlic.com/~lynn/subtopic.html#545tech

unfortunately, the FAA air traffic control project was taking up all the available/spare 360/50s. the science center eventually settled for a 360/40 for custom development of virtual memory hardware. cp/40 and cms were originally developed on this 360/40 with hardware modifications for virtual memory. when the 360/67 became available ... they ported cp/40 to the 360/67 as cp/67.

360/67 had both 24-bit and 32-bit virtual addressing options, basr/bas instructions, and smp support that included a channel controller ... which allowed all processors to address all channels (and take interrupts from all channels). recent 360/67 smp post
https://www.garlic.com/~lynn/2005s.html#20 MVCIN instruction

CAS was doing work on fine-grain locking with CP67 on 360/67 smp at the science center and invented compare&swap (the mnemonic was chosen because it corresponds to CAS's initials) ... which eventually showed up in 370
https://www.garlic.com/~lynn/subtopic.html#smp

past posting in similar thread (includes a list of previous postings mentioning cp40)
https://www.garlic.com/~lynn/2002b.html#7 Microcode

following excerpted from melinda's vm370 history paper at
https://www.leeandmelindavarian.com/Melinda#VMHist

In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, ''time-sharing'' meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project. The official objectives of the CP-40 Project were the following:

1. The development of means for obtaining data on the operational characteristics of both systems and application programs;

2. The analysis of this data with a view toward more efficient machine structures and programming techniques, particularly for use in interactive systems;

3. The provision of a multiple-console computer system for the Center's computing requirements; and

4. The investigation of the use of associative memories in the control of multi-user systems.

The project's real purpose was to build a time-sharing system, but the other objectives were genuine, too, and they were always emphasized in order to disguise the project's ''counter-strategic'' aspects. Rasmussen consistently portrayed CP-40 as a research project to ''help the troops in Poughkeepsie'' by studying the behavior of programs and systems in a virtual memory environment. In fact, for some members of the CP-40 team, this was the most interesting part of the project, because they were concerned about the unknowns in the path IBM was taking. TSS was to be a virtual memory system, but not much was really known about virtual memory systems. Les Comeau has written: Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred. The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle. The modifications to the 360/40 would prove to be quite successful, but it would be more than a year before they were complete.

The Center actually wanted a 360/50, but all the Model 50s that IBM was producing were needed for the Federal Aviation Administration's new air traffic control system.

One of the fun memories of the CP-40 Project was getting involved in debugging the 360/40 microcode, which had been modified not only to add special codes to handle the associative memory, but also had additional microcode steps added in each instruction decoding to ensure that the page(s) required for the operation's successful completion were in memory (otherwise generating a page fault). The microcode of the 360/40 comprised stacks of IBM punch card-sized Mylar sheets with embedded wiring. Selected wires were ''punched'' to indicate 1's or 0's. Midnight corrections were made by removing the appropriate stack, finding the sheet corresponding to the word that needed modification, and ''patching'' it by punching a new hole or by ''duping'' it on a modified keypunch with the corrections.


... snip ...

MVCIN instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Tue, 18 Oct 2005 09:23:13 -0600
Newsgroups: bit.listserv.ibm-main
DASDBill2@ibm-main.lst wrote:
HASP accessed its SPOOL data this way until users began to complain that they had no program that would back up/restore SPOOL volumes to tape. Then the HASP team made this record alternation an option. The thought was that accessing every other record in sequence would provide a little boost in performance. The same technique was (and maybe still is) used in CMS files. HASP's record alternation option was removed when HASP was replaced with JES2.

vm370 used 101-byte filler records on 3330 drives for page-formatted stuff (paging, spool, chkpt, directory, etc). a 3330 had 3 4k page records per track, with 101-byte filler records between the page records ... 57 4k page records per cylinder (19 tracks x 3 records).

in cp67, the original code did fifo, single-record-at-a-time transfers. i added code so that disk (2314) would do ordered seek queueing, and disk/drum (2314 & 2301) would chain multiple queued requests (for the same cylinder) into a single i/o. on the 2301 ... single-record-per-i/o thruput peaked at around 80 records/sec. with chaining, the thruput could hit 300 records/sec.
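a minimal sketch of ordered seek ("elevator") queueing (illustrative, not the cp67 code; the same-cylinder chaining was the separate trick of batching every queued request for the arm's current cylinder into one channel program):

import bisect

class SeekQueue:
    def __init__(self):
        self.pending = []                # request cylinders, kept sorted

    def add(self, cyl):
        bisect.insort(self.pending, cyl)

    def next_request(self, arm_cyl, direction):
        # service the nearest request in the arm's direction of travel,
        # sweeping back the other way when that side is exhausted
        i = bisect.bisect_left(self.pending, arm_cyl)
        if direction > 0 and i < len(self.pending):
            return self.pending.pop(i)
        if self.pending:
            return self.pending.pop(i - 1 if i > 0 else 0)
        return None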

for vm370, on 3330 ... you would like to transfer 3 pages per revolution ... however, queued page requests for the three slots (1,2,3) might not be on the same track ... which could involve doing a head switch to pick up the next consecutive (rotational) record on a different track. the additional head-switch ccw added end-to-end processing latency (while the disk continued spinning), so the start of the next page record could rotate past the head before the processing completed. the dummy filler records were there to increase the rotational delay before the start of the next page record came under the head ... so that the channel/controller/drive would (hopefully) have had time to do the extra ccw processing.

i did a bunch of benchmarking with different filler record sizes, channels, processors, ... as well as disks/controllers from different vendors. the default channel processing spec. required a 110 byte filler record for the extra head-switch ccw processing latency. the standard 3330 spec. only had room on the track for 101 byte fillers. 158 & below processors had integrated channels (i.e. the processor engine was time-shared between executing 370 microcode and executing channel microcode). the 168 had external hardware channels with high performance and lower latency. 4341 integrated channels tended to have latency close to the 168 external hardware channels. 158 integrated channels had the highest processing latency.
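
some rough arithmetic on why the filler size matters (the 3330 figures here ... ~16.7ms/revolution and ~13,030 bytes/track ... are my assumptions, not numbers from the benchmarks):

# how much head-switch ccw processing time a filler record buys before the
# start of the next page record rotates past the head (assumed 3330 numbers)
REV_MS = 16.7              # one revolution at 3600 rpm
TRACK_BYTES = 13030        # approximate 3330 track capacity
us_per_byte = REV_MS * 1000 / TRACK_BYTES     # ~1.3 microseconds per byte

for filler in (101, 110):
    print(f"{filler}-byte filler -> ~{filler * us_per_byte:.0f}us of budget")
# a channel needing more processing time than the budget misses the record
# and loses a full revolution, which is the effect the benchmarks measured.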

for the 303x ... an external channel director box was used. the 303x channel director was actually a 158 processor engine with the 370 microcode removed, leaving only the integrated channel microcode. a 3031 was basically a 158 processor engine with only the 370 microcode (and the integrated channel microcode removed), configured to use a 303x channel director (in some sense a 3031 was actually a two-processor smp ... except the two processors were running different microcode loads). a 3032 was a 168-3 configured to use channel directors. a 3033 started out as the 168-3 wiring/logic design mapped to faster chip technology (and configured for channel directors). all of the 303x channel tests showed the same channel i/o processing latency as the 158 channel tests.

originally on cp67 ... and then ported to vm370 ... i had done a remap of the cms filesystem to use page mapped semantics, with a high-level virtual machine interface using the kernel paging infrastructure. as an undergraduate, i had created a special i/o interface for cms disk i/o that drastically reduced the processing pathlength (and eventually turned into diag i/o). the page mapped semantics further reduced the pathlength overhead (since it eliminated various operations of simulating a real i/o paradigm in a virtual address space environment) and allowed all sorts of optimization tricks in performing the i/o operation (a lot of fancy optimization tricks had been done in the kernel paging environment ... that now were free for cms filesystem i/o). for instance, somewhat because of the real i/o paradigm orientation, cms only did chained i/o if the records for a file happened to be allocated sequentially and consecutively on disk. the page slot chaining code didn't care which order they were on the disk ... if there were requests for the same cylinder ... just chain up all pending requests and let it rip (regardless of things like file sequential/consecutive considerations). misc. collected posts on having done page mapped semantics for cms filesystem ... originally on cp67 in the early 70s
https://www.garlic.com/~lynn/submain.html#mmap

misc. past posts on filler records
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001n.html#16 Movies with source code (was Re: Movies with DEC minis)
https://www.garlic.com/~lynn/2002b.html#17 index searching
https://www.garlic.com/~lynn/2003f.html#40 inter-block gaps on DASD tracks
https://www.garlic.com/~lynn/2003f.html#51 inter-block gaps on DASD tracks
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore

winscape?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: winscape?
Newsgroups: alt.folklore.computers
Date: Tue, 18 Oct 2005 10:18:09 -0600
rpl writes:
Partitioning is a mainframe thing; it's designed to take control out of the hands of individual end-users and place it in the hands of the operators whose job it is to make sure that the terminal-services don't get too bogged down, that month-end isn't still running the next day, that the programmer who bought the box of beer that's sitting under the floorboards can jump to the head of his priority class when the need arises, etc.

science center did project in 1965
https://www.garlic.com/~lynn/subtopic.html#545tech

for a virtual machine thing (partitioned) called cp40 on a 360/40 (that they had custom modified with virtual memory hardware). when the official 360 product with virtual memory hardware came out, the 360/67, they moved cp40 to the 360/67 and called it cp67 ... recent post
https://www.garlic.com/~lynn/2005s.html#21 MVCIN instruction

this morphed into vm370 for 370s. in the 370 time-frame some amount of the virtual machine support stuff started migrating into the hardware. eventually this migration culminated in LPARs (logical partitions), which are part of nearly every mainframe operation today.

an old virtualization reference:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

however, virtualization does seem to have infected the rest of the industry. small sample of news items from the past couple days mentioning virtualization:

Microsoft: is a virtual revolution about to start?
http://www.cbronline.com/article_feature.asp?guid=08D22C6D-9B9A-40BC-90AB-297E25E55D84
Microsoft Makes Its Move
http://www.crn.com/sections/microsoft/microsoft.jhtml?articleId=172301189
Reducing browser privileges
http://online.securityfocus.com/infocus/1848
Microsoft to shelve per processor prices for users willing to get virtual
http://www.theregister.co.uk/2005/10/11/ms_virtual_change/
Microsoft simplifies its virtualisation licences
http://www.scoopt.org/article13105-microsoft-simplifies-its.html
Resource Virtualization and Disaster Recovery System with McDATA Directors
http://home.businesswire.com/portal/site/google/index.jsp?ndmViewId=news_view&newsId=20051017005292&newsLang=en
Virtualization roils traditional licensing models
http://news.yahoo.com/s/infoworld/20051017/tc_infoworld/69754;_ylt=A9FJqZlSwlND0ocAvAYjtBAF;_ylu=X3oDMTA5aHJvMDdwBHNlYwN5bmNhdA--
Battling Complexity (virtualization)
http://www.computerworld.com/hardwaretopics/storage/story/0,10801,105434,00.html
VMware Unveils Next Generation of Industry-Leading Data Center Products: ESX Server 3 and VirtualCenter 2
http://biz.yahoo.com/prnews/051017/sfm073.html?.v=27
VMware Unveils Next Generation of Industry-Leading Data Center Products: ESX Server 3 and VirtualCenter 2
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/10-17-2005/0004170334&EDATE=
VMware Eases Virtual Machine Management
http://www.eweek.com/article2/0,1895,1871605,00.asp
Virtualizing Server Farms
http://www.internetnews.com/ent-news/article.php/3556611
More Than 60 Leading Independent Software Vendors Back VMware Virtual Infrastructure
http://biz.yahoo.com/prnews/051017/sfm074.html?.v=25
BMC Promises Capacity Control For Virtualized Environments
http://www.informationweek.com/story/showArticle.jhtml?articleID=172301436
NAS Virtualization Poised to Double in the Next Year
http://sanjose.dbusinessnews.com/shownews.php?newsid=47270&type_new
BMC moves into virtualization resource management
http://www.cbronline.com/article_news.asp?guid=B67C43F0-C102-4E3D-A20B-726B30918799
A new licensing scheme from Microsoft will encourage server-based software users to virtualize, and save cost.
http://www.cmpnetasia.com/oct3_nw_viewart.cfm?Artid=27802&Catid=1&subcat=9&section=Features
Virtualization and GRID computing heading in similar directions
http://weblog.infoworld.com/gridmeter/archives/2005/10/virtualization.html
Platform Computing signs BNP Paribas Arbitrage to GRID package
http://www.finextra.com/fullstory.asp?id=14401
VIRTUALIZATION FOR GRID COMPUTING
http://www2.platform.com/products/virtualization/
VMware Upgrades Virtualization Gear
http://www.techweb.com/wire/networking/172301659
BMC Intros Virtualization Suite
http://www.informationweek.com/showArticle.jhtml?articleID=172301590
VMware upgrades data center software, ambitions
http://news.com.com/VMware+upgrades+data+center+software%2C+ambitions/2100-1010_3-5897924.html?tag=nefd.top
VMware upgrades virtualization gear
http://www.cmpnetasia.com/oct3_nw_viewart.cfm?Artid=27814&Catid=5&subcat=46&section=News
BMC debuts virtualization suite
http://www.cmpnetasia.com/oct3_nw_viewart.cfm?Artid=27812&Catid=8&subcat=76&section=News
Breaking News--VMware Boosts VM Scalability with ESX Server 3
http://www.itjungle.com/tfh/tfh020705-story01.html
VMware upgrades data center software, ambitions
http://news.zdnet.com/2100-3513_22-5897924.html
XenSource Vs. VMware Battle Imminent
http://www.crn.com/sections/breakingnews/dailyarchives.jhtml?articleId=172301643
VMware Updates Virtualization Solutions
http://www.thewhir.com/marketwatch/vmw101705.cfm
BMC Intros Virtualization Suite
http://informationweek.com/story/showArticle.jhtml?articleID=172301590
Open Source virtual server software a likely dark horse
http://searchwin2000.techtarget.com/originalContent/0,289142,sid1_gci1134765,00.html
BMC Begins 'Virtual' Initiative
http://www.eweek.com/article2/0,1895,1872430,00.asp
Increasing the Load: Virtualization Moves Beyond Proof of Concept in the Volume Server Market, According to IDC
http://home.businesswire.com/portal/site/google/index.jsp?ndmViewId=news_view&newsId=20051018005052&newsLang=en
VMware Upgrade Will Double CPU Support, Automate Tasks
http://www.computerworld.com/softwaretopics/software/story/0,10801,105458,00.html
VMware Unveils Data Center Products
http://www.webhosting.info/news/1/vmware-unveils-data-center-products_1017053603.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Tue, 18 Oct 2005 14:55:48 -0600
echomko_at_@polaris.umuc.edu (Eric Chomko) writes:
Kerberos has the disadvantage of being more complex than simple shared-secret authentication (eg classic Unix username and password hash), but less sexy than, say, X.509 certificates (which get a lot of attention because of SSL). And it has other competitors, such as RADIUS. So it tends to get short shrift in the classroom.

we used to go by and visit/check up on project athena projects periodically, including kerberos. i have some memory of sitting thru a presentation & discussion of the (at the time very recent) cross-domain kerberos operation.

note that the original pkinit draft for kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

specified simple certificate-less, digital signature authentication
https://www.garlic.com/~lynn/subpubkey.html#certless

... w/o requiring x.509 identity certificates. it was later that somebody had the bright idea to add an x.509 option to the kerberos pkinit draft standard (i've periodically gotten email from somebody apologizing for instigating that mistake).

there are also, certificate-less, digital signature authentication implementations for radius
https://www.garlic.com/~lynn/subpubkey.html#radius

minor, somewhat historical reference ... i once was actually somewhat involved in doing a radius configuration for a vendor's box (the vendor that had originated radius, before they got bought, and before radius was offered up as an ietf standard). trivia question ... what was the name of the vendor that originated radius?

we got brought in to consult with a small client/server startup in silicon valley that wanted to do payment transactions ... and they had this thing called https & ssl
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

during that effort, we coined the term certificate manufacturing
https://www.garlic.com/~lynn/subpubkey.html#manufacture

to differentiate that environment from x.509 identity certificates and PKI
https://www.garlic.com/~lynn/subpubkey.html#sslcert

for some fun ... i've taken to periodically asserting that the first payment gateway was the original SOA.

some relatively recent posts mentioning x.509 identity certificates
https://www.garlic.com/~lynn/aadsm17.htm#4 Difference between TCPA-Hardware and a smart card (was: examp le: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#12 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#18 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#19 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#21 Identity (was PKI International Consortium)
https://www.garlic.com/~lynn/aadsm17.htm#23 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#26 privacy, authentication, identification, authorization
https://www.garlic.com/~lynn/aadsm17.htm#34 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#41 Yahoo releases internet standard draft for using DNS as public key server
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#5 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#22 [anonsec] Re: potential new IETF WG on anonymous IPSec
https://www.garlic.com/~lynn/aadsm18.htm#31 EMV cards as identity cards
https://www.garlic.com/~lynn/aadsm18.htm#52 A cool demo of how to spoof sites (also shows how TrustBar preventsthis...)
https://www.garlic.com/~lynn/aadsm18.htm#55 MD5 collision in X509 certificates
https://www.garlic.com/~lynn/aadsm19.htm#11 EuroPKI 2005 - Call for Participation
https://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
https://www.garlic.com/~lynn/aadsm19.htm#17 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#33 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm19.htm#45 payment system fraud, etc
https://www.garlic.com/~lynn/aadsm19.htm#49 Why Blockbuster looks at your ID
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#5 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#8 UK EU presidency aims for Europe-wide biometric ID card
https://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#17 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#21 Qualified Certificate Request
https://www.garlic.com/~lynn/aadsm20.htm#36 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#38 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#39 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#40 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#42 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#12 Payment Tokens
https://www.garlic.com/~lynn/2005f.html#62 single-signon with X.509 certificates
https://www.garlic.com/~lynn/2005g.html#45 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005h.html#27 How do you get the chain of certificates & public keys securely
https://www.garlic.com/~lynn/2005i.html#2 Certificate Services
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#4 Authentication - Server Challenge
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005i.html#33 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005k.html#60 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#25 PKI Crypto and VSAM RLS
https://www.garlic.com/~lynn/2005l.html#29 Importing CA certificate to smartcard
https://www.garlic.com/~lynn/2005l.html#33 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#36 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#15 Course 2821; how this will help for CISSP exam ?
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005n.html#39 Uploading to Asimov
https://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine authentication
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2005q.html#13 IPSEC with non-domain Server
https://www.garlic.com/~lynn/2005q.html#23 Logon with Digital Siganture (PKI/OCES - or what else they're called)
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance
https://www.garlic.com/~lynn/2005s.html#10 NEW USA FFIES Guidance

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

MVCIN instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Wed, 19 Oct 2005 07:17:07 -0600
Newsgroups: bit.listserv.ibm-main
Robert A. Rosenberg wrote:
You are confusing two separate schemes. In my case the data was written in standard order. Since I was using BDAM (it might have been BSAM where I was supplying the record number I wanted) I just REQUESTED the records in reverse order (after I used the EXCP CCW string to find the last record on the current track).

HASP WROTE the records in the order (and numbered as in the Count Section) 1 4 2 5 3 6 (assuming 6 blocks per track). This allowed it to read the full track in two revolutions of the drive. If it had tried to read a normally numbered track the channel was too slow to catch the next record on that revolution. The out-of-sequence-interleaved numbering allowed the channel to catch up and be ready for the next block before it passed the read head. By JES2 days the channels were fast enough to keep up with the DASD (as well as there being cached buffering and read-track commands/etc.).
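
the record alternation described above (1 4 2 5 3 6, reading a full track in two revolutions) is a 2:1 interleave; a small sketch of generating that numbering (the function name is mine):

def interleaved_order(blocks_per_track, ways=2):
    """logical record numbers in physical slot order, e.g. 6 -> [1,4,2,5,3,6]"""
    per_pass = -(-blocks_per_track // ways)        # ceiling division
    return [pass_no * per_pass + slot + 1
            for slot in range(per_pass)
            for pass_no in range(ways)][:blocks_per_track]

assert interleaved_order(6) == [1, 4, 2, 5, 3, 6]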


in the early 70s, the stuff for page-map support of cms filesystem (originally on cp67):
https://www.garlic.com/~lynn/submain.html#mmap

picked up the earlier pathlength stuff i had done for stylized fastpath ccws for cms disk i/o (that eventually turned into diag 18) ... but it also eliminated a lot of the overhead of simulating real i/o operations in a virtual memory environment (prefetch & lock/pin of pages before starting the i/o, unpinning all the pages when done, etc).

the other thing was that cms file i/o only did chaining when file records were sequential and contiguous. i had originally done the chaining logic for page i/o for cp67 ... where each page transfer was an independent operation and multiple could be chained together for optimal disk operation ... regardless of the original order or source of the request. this meant that different requests for the same shared area from different tasks could be chained together ... or that chains could be re-ordered (i.e. multiple records for the same file could be randomly ordered on the same cylinder ... the page mapped interface would queue the requests ... and optimal reordering and chaining would fall out in the standard page support).

there were also some tricks about looking at system contention and dynamically deciding to build huge i/o transfers ... and/or delay stuff ... part of it was that using the paging interface ... you could have asynchronous operation transparent to the application ... by using the virtual memory hardware to provide the necessary serialization control.

note that the migration of os/360 to a virtual memory environment resulted in a similar need to do all the real i/o simulation processing. as mentioned before ... one time when we were doing some stuff 3rd shift in the pok machine room ... i think it was Ludlow(?) who was working on the VS2/SVS prototype ... basically using a 360/67, taking mvt and cobbling together a little bit of single virtual address space layout and a low-level page fault and replacement handler. the other part was taking cp67's CCWTRANS and hacking it into the side of MVT for doing all the steps of translating and shadowing CCWs, fetching/pinning virtual pages, untranslating, unpinning, etc.
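
a much-simplified sketch of what that CCWTRANS-style processing involves ... copy the virtual channel program, fault-in/pin the referenced pages, and rewrite data addresses from virtual to real (ignoring buffers that cross page boundaries, self-modifying channel programs, and the other hard parts; all names here are illustrative):

def translate_ccws(ccws, pin_page, v2r, page_size=4096):
    """ccws: list of (opcode, virtual_addr, count) with count >= 1.
    returns the shadow channel program plus the pages pinned for the i/o
    (which must be unpinned when the i/o completes)."""
    shadow, pinned = [], []
    for op, vaddr, count in ccws:
        first = vaddr // page_size
        last = (vaddr + count - 1) // page_size
        for page in range(first, last + 1):
            pin_page(page)               # fault it in and lock it in memory
            pinned.append(page)
        shadow.append((op, v2r(vaddr), count))  # virtual -> real data address
    return shadow, pinned

shadow, pinned = translate_ccws(
    [("READ", 0x2010, 4096)],
    pin_page=lambda p: None,             # stand-in for fault-in/lock
    v2r=lambda v: v + 0x100000)          # stand-in for the page-table lookup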

a few old posts about hacking cms disk i/o pathlength as undergraduate:
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/2003.html#60 MIDAS
https://www.garlic.com/~lynn/2003k.html#7 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005b.html#23 360 DIAGNOSE
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP

IEH/IEB/... names?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IEH/IEB/... names?
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 19 Oct 2005 08:11:51 -0600
Jay Maynard writes:
Z manuals were supposed to be IBM internal use only, but IBMers would often obtain copies for their customers.

there was

<unclassified>
internal use only
confidential
confidential - restricted
registered confidential

all the "Z" documents i saw were confidential, and a customer might have to sign something to get a copy. i wrote a few science center reports
https://www.garlic.com/~lynn/subtopic.html#545tech

on things like page mapped filesystem support
https://www.garlic.com/~lynn/submain.html#mmap

that got Z'ed. there was also a "Y" document prefix ... those were frequently program logic manuals.

... registered confidential had all copies numbered and had to be kept in double-locked cabinets. site security had a list of all registered confidential documents and would periodically perform audits as to them still being in your possession.

at one point, i had collected a file cabinet drawer full of the 811 documents (for 11/78) that were registered confidential (various 370-xa hardware, software and architecture documents).

when we started doing the online phone book ... the paper copies were typically stamped internal use only. early on, various plant site people from around the world would refer us to their site security people before giving up a machine readable copy. the site security people would insist that machine readable versions of the phone book had to be classified at confidential (at a minimum) ... and the idea that we would be collecting machine readable copies from all the sites ... appeared to boggle their minds. after some amount of effort, we got a couple major sites to relent (san jose, pok, etc) and let us have the machine readable versions as purely internal use only. after that, we would deal with local site security people by referring them to the other sites that had already relented.

then there was the case of the cern tso/cms bakeoff share report (circa 1974?), the internal corporate copies got stamped confidential - restricted, available on a need-to-know basis only (only appropriately authorized employees were allowed to see the tso/cms comparison done by cern).

misc. past posts mentioned online phonebook work:
https://www.garlic.com/~lynn/2000b.html#92 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2000g.html#35 does CA need the proof of acceptance of key binding ?
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001j.html#30 Title Inflation
https://www.garlic.com/~lynn/2002e.html#33 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002m.html#19 A new e-commerce security proposal
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2003p.html#20 Dumb anti-MITM hacks / CAPTCHA application
https://www.garlic.com/~lynn/2004c.html#0 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004l.html#32 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2005c.html#38 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#43 History of performance counters
https://www.garlic.com/~lynn/2005s.html#3 Flat Query

misc. past references to corporate security classifications
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#14 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002n.html#37 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003k.html#13 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#21 TSO alternative
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004n.html#17 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005f.html#42 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#49 Moving assembler programs above the line

random past posts mentioning cern:
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/2000e.html#23 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2000f.html#61 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001g.html#24 XML: No More CICS?
https://www.garlic.com/~lynn/2001g.html#54 DSRunoff; was Re: TECO Critique
https://www.garlic.com/~lynn/2001h.html#11 checking some myths.
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001n.html#40 Google increase archive reach
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#14 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002i.html#37 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002n.html#35 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#37 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002o.html#54 XML, AI, Cyc, psych, and literature
https://www.garlic.com/~lynn/2003.html#54 Timesharing TOPS-10 vs. VAX/VMS "task based timesharing"
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003h.html#19 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003k.html#13 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004c.html#26 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#27 Moribund TSO/E
https://www.garlic.com/~lynn/2004d.html#39 System/360 40th Anniversary
https://www.garlic.com/~lynn/2004d.html#53 COMPUTER RELATED WORLD'S RECORDS?
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#17 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005l.html#10 who invented CONFIG/SYS?
https://www.garlic.com/~lynn/2005s.html#12 Flat Query

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

whats the difference between Encryption and Encoding ? or both are same ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: whats the difference between Encryption and Encoding ? or both are same ?
Newsgroups: sci.crypt
Date: Wed, 19 Oct 2005 08:18:36 -0600
Paul Leyland writes:
Another widely used definition is that encoding transforms semantic elements (words, phrases, commands, etc). The term is used whether or not any notion of secrecy is important --- that is, the codebook which specifies the transformation may be public or not depending on circumstances.

another scenario is a digital signature ... where the hash of a message is encoded with the private key. the recipient is expected to be able to recalculate the hash, decode the digital signature with the appropriate public key and compare the two hashes.
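
a minimal sketch of that flow using the python "cryptography" package (rsa and the padding choice are purely illustrative; the scenario doesn't specify an algorithm):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire transfer: $100"
# sender: hash the message and "encode" the hash with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: recalculate the hash, decode the signature with the public
# key, and compare; verify() raises InvalidSignature on any mismatch.
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
# note the message itself travels in the clear ... authentication and
# integrity from the PAIN acronym below, no privacy.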

from the security PAIN acronym
P privacy
A authentication
I integrity
N non-repudiation


a digital signature is targeted at authentication and integrity w/o having to address privacy (secrecy or confidentiality).

for a little drift ... i've sporadically mentioned that there sometimes appears to be semantic confusion between "digital signature" and "human signature" ... since they both contain the word "signature". various past collected postings on the signature subject
https://www.garlic.com/~lynn/subpubkey.html#signature

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

MVCIN instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MVCIN instruction
Date: Thu, 20 Oct 2005 10:16:20 -0600
Newsgroups: bit.listserv.ibm-main
lynn@garlic.com wrote:
in the early 70s, the stuff for page-map support of cms filesystem (originally on cp67):
https://www.garlic.com/~lynn/submain.html#mmap


one issue with chained requests for cms file operations ... was that the cms filesystem (dating back to 1965) used single block allocation, one block at a time, and didn't have any semantics for supporting explicit contiguous allocation. the standard cms filesystem multi-block read only helped on the off chance that sequential allocation of individual file blocks accidentally happened to be sequential on disk (which might happen if all files were sequentially dumped to tape and all files then erased ... creating a clean filesystem, and then the files sequentially reloaded). so i added some new allocation semantics that could create advisory requests for multiple, physically contiguous blocks. this helped with being able to do multi-block chained i/o operations.
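
a minimal sketch of the advisory contiguous allocation idea (the free-map representation and names are mine, not the cms code): try for n adjacent free blocks so later multi-block chained i/o is possible, falling back to scattered single blocks.

def allocate(free_map, n):
    """free_map: list of booleans, True = block free; returns block numbers"""
    run = 0
    for i, free in enumerate(free_map):
        run = run + 1 if free else 0
        if run == n:                     # found n physically contiguous blocks
            start = i - n + 1
            for b in range(start, i + 1):
                free_map[b] = False
            return list(range(start, i + 1))
    # advisory only: no contiguous run exists, fall back to single blocks
    picked = [i for i, free in enumerate(free_map) if free][:n]
    for b in picked:
        free_map[b] = False
    return picked

fm = [True, False, True, True, True]
assert allocate(fm, 3) == [2, 3, 4]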

i also used a variation on the page mapped interface in the 80s for doing a rewrite of the vm370 spool file system ... somewhat as part of the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the base spool filesystem was written in assembler as part of the kernel ... it provided a "high-speed" interface for applications that involved synchronously moving 4k blocks back and forth across the kernel line. this synchronous character became a problem for vnet/rscs when trying to support lots of high-speed links. a heavily loaded spool file system might be doing 40-50 4k transfers a second. however, because it was a synchronous api, rscs/vnet was non-runnable during the actual disk transfers ... and never could have more than one request active at a time. as a result rscs/vnet might only get 5-6 4k spool transfers per second (competing with all the other uses of the spool system).

so my objectives for the hsdt (high speed data transfer) spool file system (SFS) rewrite were:
1) move implementation from kernel to virtual address space
2) implement in vs/pascal rather than assembler
3) overall pathlength of the new pascal-based implementation running in a virtual address space should be less than the existing assembler, kernel implementation
4) support contiguous allocation
5) support multiple block transfer requests
6) support asynchronous transfer requests (see the sketch after this list)
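
a hypothetical sketch of what objectives 5 & 6 amount to at the api level (every name here is invented for illustration; a python thread stands in for the actual kernel up-call machinery):

import threading

BLOCK = 4096
_store = {}                                    # block number -> 4k buffer

def spool_write_async(blocks, done):
    """blocks: {block_no: 4k bytes}; returns immediately, done() runs later"""
    def transfer():
        _store.update(blocks)                  # stand-in for the chained i/o
        done()
    threading.Thread(target=transfer).start()  # caller stays runnable

def spool_read_async(block_nos, done):
    """queue a multi-block read; done(buffers) is called on completion"""
    def transfer():
        done([_store[b] for b in block_nos])
    threading.Thread(target=transfer).start()

# queue 150 blocks in one call rather than 150 synchronous round trips
spool_write_async({n: bytes(BLOCK) for n in range(150)},
                  done=lambda: print("write complete"))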


so the cp kernel needed several modifications ... first it had to be able to come up w/o having a spool system active at initial boot (as it was accustomed to having), then be able to activate the spooling subsystem for managing spool areas ... and handle spooling activity by making up-calls into the spool processor address space.

an annoyance in the existing implementation was that all spool areas were treated as one large resource ... all spool resources had to be available ... or the system didn't come up. the kernel now had to be able to operate independently of the spool resource. so while i was at it, i added some extended integrity and availability. each spool physical area could essentially be independently activated/deactivated (varied on/off). there was an overall checkpoint/warm start facility ... however additional information was added to spool records ... so that if checkpoint and warm start information was missing ... it was possible for the spooling subsystem to sequentially read a physical area (it could generate page mapped requests for 150 4k blocks at a time ... and the kernel might even chain these into a single physical i/o, aka if it happened to be a 3380 cylinder) and recover all allocated spool files (and nominally do it significantly faster than the existing cp checkpoint process ... which sort of had starting records for each file ... but then had to sequentially follow a chain of records, one read at a time). if warm start information wasn't available ... the large sequential physical read tended to be significantly faster than the one-at-a-time checkpoint scatter read.

the standard kernel spool implementation had sequentially chained control blocks representing each file. for a large active spool system, the implementation spent significant pathlength running the sequential chains. the pascal implementation used library routines for hash table and red/black tree management of all the spool file control blocks. this tended to more than offset any pathlength lost moving the function into a virtual address space.
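
the pathlength difference is easy to see in miniature ... a python dict standing in for the pascal library's hash table:

class FileCB:                           # spool file control block
    def __init__(self, fileid, nxt=None):
        self.fileid, self.next = fileid, nxt

def chain_lookup(head, fileid):         # old scheme: O(n) walk of the chain
    cb = head
    while cb is not None and cb.fileid != fileid:
        cb = cb.next
    return cb

index = {}                              # new scheme: O(1) average hash lookup
def indexed_lookup(fileid):
    return index.get(fileid)

# with thousands of active spool files the chain walk dominates pathlength;
# hashed (or red/black tree, for ordered scans) lookup keeps the cost flat.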

the high-speed spool api was extended to allow specifying multiple 4k blocks for both reads & writes ... and enhanced to allow the api to operate asynchronously. a single full-duplex 56kbit link could mean up to around 2 4k transfers per second (one 4k transfer in each direction). several loaded 56kbit links could easily run into the spool file thruput bottleneck on heavily loaded systems (rscs/vnet possibly being limited to 5-6 4k records/sec)

the hsdt machine had several channel connections to other machines in the local area and multiple full-duplex T1 (1.5mbits/sec) connections. a single T1 has about 30 times the thruput of a 56kbit link ... which in turn increases the requirement to around 60 4k records per second (for a single full-duplex T1 link). an hsdt vnet/rscs node might reasonably be expected to have thruput capacity of several hundred 4k records/sec (a design point possibly one hundred times that of a nominal rscs/vnet node).
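
the figures in the last two paragraphs are straightforward arithmetic (reproducing the rounding used above):

BLOCK_BITS = 4096 * 8                     # one 4k spool record
each_way_56k = 56_000 / BLOCK_BITS        # ~1.7/sec; rounded to 1 each way
full_duplex_56k = 2                       # ~2 4k records/sec per 56kbit link
t1_ratio = 1_544_000 / 56_000             # ~27.6 ... "about 30 times"
print(full_duplex_56k * round(t1_ratio))  # ~56, the ~60 records/sec figure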

hsdt operated three sat. stations: san jose, austin, and yorktown ... with each hsdt node having multiple channel and T1 links to other machines in the local area. the sat. bandwidth was initially configured as multiple full-duplex T1 links between the three nodes. however, we designed and were building a packet broadcast operation. the earth stations were TDMA, so that each station had specific times when it could transmit. the transmit bursts could then be configured to simulate full-duplex T1 operation. the packet switch-over was to eliminate the telco T1 emulation and treat it purely as a packet broadcast architecture (somewhat analogous to t/r lan operation but w/o the time-delay of token passing, since the bird in the sky provided clock synchronization for tdma operation).

the san jose hsdt node was in bldg. 29, but there were high-speed channel links to other machines in bldg. 29 and telco T1 links to other machines in the san jose area ... besides the sat. links.

one of the challenges was that all corporate transmissions had to be encrypted. the internal network had been larger than the whole arpanet/internet from just about the beginning until sometime in mid-85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

arpanet was about 250 nodes at the time it converted to tcp/ip on 1/1/83. by comparison, later that year, the internal network passed 1000 nodes ... minor reference
https://www.garlic.com/~lynn/internet.htm#22

note the size of the internal network does not include bitnet/earn nodes ... which were univ. nodes using rscs/vnet technology (and which were about the same size as the arpanet/internet in the period). misc. posts mentioning bitnet &/or earn:
https://www.garlic.com/~lynn/subnetwork.html#bitnet

about the time we were starting hsdt, the claim was that the internal network had over half of all link encryptors in the world. moving away from emulated telco processing for hsdt also eliminated the ability to use link encryptors ... so we had to design packet-based encryption hardware that potentially changed keys on every packet ... and aggregate thruput hit multiple megabytes/second. we further complicated the task by establishing an objective that the card could be manufactured for less than $100 (using off-the-shelf chips ... and still supporting mbyte/sec or above thruput). we also wanted to be able to use it in lieu of standard link encryptors, which were running $16k.
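
as a modern illustrative analogue (emphatically not the actual hsdt hardware design) of changing keys on every packet ... derive each packet's key from a master key plus a packet counter, so no two packets share a key:

import hashlib, hmac

MASTER_KEY = b"\x00" * 32        # placeholder; would come from key management

def packet_key(counter: int) -> bytes:
    # hmac used as a simple key-derivation primitive keyed by the master key
    return hmac.new(MASTER_KEY, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

assert packet_key(0) != packet_key(1)    # every packet gets a fresh key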

the other piece was that hsdt nodes were making a lot of use of HYPERchannel hardware .... so when the initial mainframe tcp/ip implementation was done in pascal ... i added rfc 1044 support. the base product shipped with the 8232 controller, which had some idiosyncrasies; that support would consume a whole 3090 processor getting 44kbytes/sec. by contrast, in some rfc 1044 tuning tests at cray research, we got 1mbyte/sec channel speed between a 4341-clone and a cray machine ... using only a modest amount of the 4341 processor.
https://www.garlic.com/~lynn/subnetwork.html#1044

having drifted this far ... i get to also mention that we weren't allowed to bid on the nsfnet backbone. (the arpanet change over to tcp/ip protocol on 1/1/83 was a major technology milestone for the internet; however, the birth of modern internetworking ... i.e. the operational prelude to the modern internet ... was the deployment of the nsfnet backbone, supporting internetworking of multiple networks.) however, my wife appealed to the director of nsf and got a technical audit ... which concluded that what we had running was at least five years ahead of all the nsfnet bids to build something new. minor recent post on the subject:
https://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a parallel x86 design

past posts mentioning SFS ... spool file system rewrite (as opposed to that other SFS that came later ... shared file system):
https://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2001n.html#7 More newbie stop the war here!
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#3 History of C
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005n.html#36 Code density and performance?

past post mentioning link encryptors
https://www.garlic.com/~lynn/aepay11.htm#37 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm14.htm#0 The case against directories
https://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm18.htm#51 link-layer encryptors for Ethernet?
https://www.garlic.com/~lynn/99.html#210 AES cyphers leak information like sieves
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002j.html#52 "Slower is more secure"
https://www.garlic.com/~lynn/2003e.html#34 Use of SSL as a VPN
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
https://www.garlic.com/~lynn/2003i.html#62 Wireless security
https://www.garlic.com/~lynn/2004g.html#33 network history
https://www.garlic.com/~lynn/2004g.html#34 network history
https://www.garlic.com/~lynn/2004p.html#44 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#51 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#55 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2005c.html#38 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design

IBM 3330

From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers
Subject: Re: IBM 3330
Date: Fri, 21 Oct 2005 01:01:17 -0700
a couple recents posts in another n.g. discussing some 3330 i/o programming aspects
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction

Internet today -- what's left for hobbiests

From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers,alt.cyberpunk
Subject: Re: Internet today -- what's left for hobbiests
Date: Fri, 21 Oct 2005 01:04:23 -0700
recent post in another n.g. mentioning a little bit about early internet
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction

MVCIN instruction

From: <lynn@garlic.com>
Newsgroups: bit.listserv.ibm-main
Subject: Re: MVCIN instruction
Date: Fri, 21 Oct 2005 01:19:14 -0700
Shmuel Metz , Seymour J. wrote:
There's no tag signal that requires the channel to lock the memory bus. It's an implementation issue whether the channel does or doesn't in any specific scenario. I should have an OEMI manual around somewhere, but if I wait someone else will dig out his copy ;-)

see immediately following post.

you replied to
https://www.garlic.com/~lynn/2005s.html#19 MVCIN instruction
posted 10:30 17oct

the immediately following post had a pointer to an online scan of the channel oem manual
https://www.garlic.com/~lynn/2005s.html#20 MVCIN instruction
posted 11:31 17oct

Random Access Tape?

From: <lynn@garlic.com>
Newsgroups: alt.comp.hardware,alt.computers,alt.folklore.computers
Subject: Re: Random Access Tape?
Date: Fri, 21 Oct 2005 11:10:16 -0700
Carl Pearson wrote:
Howdy, Group,

Been having a conversation with this guy regarding tape vs disc.

He asked if a hard or floppy disk was more like a tape recorder, or a record player.

I'm siding with record player, due to tape's inability to have random access.

I know, record players are WORM drives, and they don't record with metal oxide, but the random access feature seems so important that it outweighs tape's next-bit-in-line way of reading data.


a little drift ... thread in c.d.t that touched on tape/flat files and sequential access and migration to indexed & random access operation
https://www.garlic.com/~lynn/2005s.html#3 Flat Query?
https://www.garlic.com/~lynn/2005s.html#4 Flat Query?
https://www.garlic.com/~lynn/2005s.html#6 Flat Query?
https://www.garlic.com/~lynn/2005s.html#8 Flat Query?
https://www.garlic.com/~lynn/2005s.html#9 Flat Query?
https://www.garlic.com/~lynn/2005s.html#12 Flat Query?
https://www.garlic.com/~lynn/2005s.html#14 Flat Query?

Power5 and Cell, new issue of IBM Journal of R&D

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: Power5 and Cell, new issue of IBM Journal of R&D
Date: Fri, 21 Oct 2005 13:01:25 -0700
John F. Carr wrote:
20 years ago IBM threw two teams at the IBM RT, one of them porting BSD Unix to the bare hardware and another porting (writing?) AIX to run on a supervisor layer (VRM). The BSD (called AOS) was preferred by those who could get their hands on it.

slightly more complicated ... romp was going to be the displaywriter follow-on, using cp.r and pl.8. business analysis eventually showed something like the smallest, entry level romp was more expensive than the top of the displaywriter market, and the project was canceled. the group looked around and somewhat found that anybody could port unix to their machine and call it a unix workstation. they invented the VRM (virtual resource manager) for the pl.8 programmers to do ... and hired the company that had done the at&t port for pc/ix to do one to the abstract vrm interface (supposedly the justification was that it would take less work than doing a port directly to the bare hardware interface).

in the meantime the acis group in palo alto was working on a bsd port to 370. minor folklore ... i had been trying to talk one of the people that had done vs/pascal into doing a C front-end for it (for a 370 c compiler). he disappeared one summer and showed up at metaware in santa cruz. i suggested to the palo alto group that they could contract with metaware for a 370 c compiler. somewhere along the way, the palo alto group got redirected to target the bsd port to the pc/rt (instead of 370 ... the pc/rt was already out in the market). this became aos for the pc/rt (and also used the metaware c compiler). the palo alto group took some pride in pointing out that the aos/bsd port to pc/rt bare metal took less effort than either the vrm implementation or the AT&T port to the vrm interface.

there were misc. & sundry other rivalries between austin and palo alto. austin had done a journaled file system for rios/power (aix v3) using special "database" 801 hardware ... claiming that it was more efficient and less work than modifying the filesystem code to perform traditional logging and commit calls. the palo alto group took the jfs code and reworked it into a "portable" version ... one that didn't rely on 801 hardware and instead made traditional database logging and commit calls. they benchmarked both versions on the same rios/power hardware ... and the version that didn't use the special hardware was faster. recent post on the subject
https://www.garlic.com/~lynn/2005r.html#27 transactional memory question

misc. past 801, romp, rios, fort knox, etc collected posts
https://www.garlic.com/~lynn/subtopic.html#801

Power5 and Cell, new issue of IBM Journal of R&D

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: comp.arch
Subject: Re: Power5 and Cell, new issue of IBM Journal of R&D
Date: Fri, 21 Oct 2005 23:57:46 -0700
a little hypervisor topic drift from a thread in a.f.c ... somewhat related to mainframe hypervisors
https://www.garlic.com/~lynn/2005s.html#23

i've previously posted on the erep/ras issue for mainframe unix ports

1) at&t unix infrastructure and api on top of a stripped down tss/370 kernel called ssup; only available internally within at&t
2) Amdahl uts (called gold before announce) ... typically ran under a virtual machine hypervisor
3) aix/370 (& aix/ps2), the ucla locus port ... also ran under a virtual machine hypervisor (done by the same palo alto group that had done the bsd port to pc/rt for aos)

these mainframes were frequently multi-million dollar affairs with a cadre of people doing service and preventive maintenance. the field service people claimed they couldn't/wouldn't do their job without the appropriate erep/ras (somewhat imagine an automobile analogy where your multimillion dollar investment is out of warranty because it didn't get its service).

the issue was that erep/ras was a major component of a mainframe operating system ... a significantly larger undertaking than a straightforward unix port to the mainframe. the tss/370 ssup and the virtual machine hypervisors provided this erep/ras function on behalf of the unix port (w/o the significant effort of having to build a unix-based erep/ras implementation)

for quite a bit of topic drift ... some posts about doing erep/ras stuff for the disk engineering lab
https://www.garlic.com/~lynn/subtopic.html#disk

random past posts mentioning mainframe unix ports
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001.html#44 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2001l.html#19 mainframe question
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002i.html#54 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh
https://www.garlic.com/~lynn/2002j.html#36 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002p.html#45 Linux paging
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003h.html#45 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003o.html#49 Any experience with "The Last One"?
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004h.html#41 Interesting read about upcoming K9 processors
https://www.garlic.com/~lynn/2004h.html#42 Interesting read about upcoming K9 processors
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2005b.html#22 The Mac is like a modern day Betamax
https://www.garlic.com/~lynn/2005c.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#28 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#5 Single System Image questions
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005m.html#4 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005m.html#7 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005q.html#14 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#26 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?

Filemode 7-9?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Subject: Re: Filemode 7-9?
Date: Sat, 22 Oct 2005 01:33:20 -0700
David Kreuter wrote:
or the developers left Cambridge to go write DEC/VMS and unix ancestors.

the cp67 development group split off from the science center (on the 4th flr)
https://www.garlic.com/~lynn/subtopic.html#545tech

and absorbed the boston programming center on the 3rd flr. a lot of the morph from cp67 to vm370 was done there ... eventually the group outgrew the 3rd flr space and moved out to the old SBC bldg in burlington mall (sbc had gone to cdc as part of some litigation settlement).

there was constant pressure to kill vm370, and the next release was always to be the last. eventually pok got vm370 killed and burlington closed on the grounds that all the people had to be moved to pok to support the vmtool ... which was a 370-xa, internal-use-only implementation ... needed for supporting mvs/xa development.

one issue was that endicott had a rapidly expanding market for vm, based on the explosion in 148 (and later 4341) vm sales. endicott finally was able to save vm370 from being completely killed, and it was allowed that some of the people might move to endicott ... rather than everybody needing to move to pok to support mvs/xa development.

however, a number of people didn't want to leave the boston area. vms was just getting started and a number of people went to dec ... i remember some also went to prime computer.

there was some joke that one of the major contributions to vms was by the head of pok lab.

Filemode 7-9?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Subject: Re: Filemode 7-9?
Date: Sun, 23 Oct 2005 00:00:27 -0700
ref:
https://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?

two other pieces of folklore somewhat related to the closing of burlington mall location

1) the news of the burlington mall closing and the killing of the product (since all the people were required to support mvs/xa development) was leaked to somebody in the group. there was then an extended period where everybody was interviewed to try and identify who leaked the information. a cloud of suspicion pretty much hung over the place until the bldg. was actually vacated.

2) somebody had completely rewritten most of the os/360 simulation support (there was a joke at the time that the 8mbyte os/360 simulation in mvs might be doing a few more things than the 64kbyte os/360 simulation in cms). memory is somewhat fading, but the rewrite really did a much more complete os/360 simulation; i believe a bunch of bdam bells and whistles were added, as well as a bunch of stuff for both reading and writing os/360 formatted filesystems. the announcement that vm370 and burlington were being killed put all such new stuff on the shelf ... and the person responsible was one of the people that went off to DEC. i'm pretty sure that the work was never revived and just evaporated with the closing of the burlington mall location.

.....

part of endicott's install base had really started to ramp up with ecps for the 138/148 ... minor ref
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

and, in fact, there was an attempt to make the ecps announcement appear as if vm & the hypervisor were integrated into every machine shipped (somewhat like the current generation of mainframe lpars) ... however this was eventually overruled as being "non-strategic".

then you found the 4341 and vax/vms pretty much competing head-to-head in the same market segment. some price/performance and feature threshold had been crossed ... and you found large corporations placing 4341 orders for several hundred machines at a time. there was a similar explosion in the deployment of 4341s inside the corporation ... which also contributed to the large explosion in the number of nodes on the internal network. minor recent post
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
which was follow-on to
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction

Von Neumann machines. The key to space and much else

From: <lynn@garlic.com>
Newsgroups: sci.physics,rec.aviation.military,sci.space.policy,alt.folklore.computers
Subject: Re: Von Neumann machines. The key to space and much else.
Date: Sun, 23 Oct 2005 04:24:27 -0700
Charlie Springer wrote:
Not so fast. Traffic is building back up in Silicon Valley and Google has cut a deal to build a huge facility at Moffett Field. A friend of mine just interviewed with them (He is very good and wrote a number of books on the inner mysteries of C and Java for SUN and worked at Apple). He wasn't interested and said the place is full of kids talking very fast and full of Jolt and Monster and other high caffeine super drinks. It looks like a crash-and-burn business culture but they are making a lot of money at the moment.

trivia ... who moved into the bldg at the other end of ellis 10 years ago?

here is a hint
http://www.svlug.org/directions/veritas.shtml

MVCIN instruction

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: MVCIN instruction
Date: Mon, 24 Oct 2005 00:30:12 -0700
Leonard Woren wrote:
Certain instruction streams ran not much faster on the 3090 than they did on the 3081. Other instruction streams ran many times faster, even though if I recall the 3090 was nominally only 2-3 times the speed of the 3081 (~30 MIPS vs 14 MIPS?) The newer the hardware the more complex this gets, so how exactly would IBM publish a timing manual?

3081d engine was only slightly faster than 3033 ... about five mips. 3081k engine was around seven mips (two processor aggregate 14mips).

for two-processor operation, 370 typically slowed each engine down by 10 percent (compared to simplex) in order to allow for cross-cache chatter activity. originally, the 3081 was never targeted as having a single-processor version (the 3081 was two processors, the 3084 was two 3081s tied together for four processors). in part because tpf (the airline control program) didn't have multiprocessor support, the single-processor 3083 was brought out ... with the cycle time not slowed down ... so a 3083j engine was nearly 15 percent faster than a 3081k engine.

a recent thread in comp.arch mentioning pok strong memory consistency and cycle slow-down
https://www.garlic.com/~lynn/2005r.html#46 Numa-Q Information

some older posts mentioning kernel allocation cache-line work in the 3081 time-frame:
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

a quick use of search engine turns up
http://homepage.virgin.net/roy.longbottom/mips.htm

a little bit from above:
This document contains performance claims and estimates for more than 2000 mainframes, minicomputers, supercomputers and workstations, from around 120 suppliers and produced between 1980 and 1996. Speed is given as MIPS (Millions of Instructions Per Second). Maximum MFLOPS (Millions of Floating Point Instructions Per Second) are also provided, usually for systems with supercomputer vector processing capabilities. Where available, production dates and cost is also shown. Details of IBM's larger mainframes and PC CPUs up to 2004 have also been included.

....


Manufacturer      No. of    OS       MHz    MIPS    MAX   Type Year  Cost
Processor          CPUs  CPU chip           Claim  MFLOPS            £=GBP

IBM                      OS VM/MVS
3083 E              1       ECL      38.5    4.7           MF  1982  $1.2M
3083 B              1       ECL      38.5    6.9           MF  1982   $2M
3083 J              1       ECL      38.5    8.9           MF  1982   $3M
3081 D              2       ECL              11            MF  1981
3081 G              2       ECL      38.5   12.6           MF  1983  $3.3M
3081 K              1       ECL      38.5    16            MF  1982  $4.3M
3084 Q              4       ECL      38.5    28            MF  1983  $8.7M
3083 CX             1       ECL      41.7    3.7           MF  1985  $605K
3083 EX             1       ECL      41.7     5            MF  1984  $960K
3083 BX             1       ECL      41.7    7.3           MF  1984  $1.7M
3083 JX             1       ECL      41.7    9.5           MF  1984  $2.6M
3081 GX             2       ECL      41.7   13.6           MF  1984  $2.8M
3081 KX             2       ECL      41.7   17.8           MF  1984  $3.4M
3084 QX             4       ECL      41.7    31            MF  1984  $6.9M

IBM                      OS MVS
                         Cost excludes VF (MFLOPS results)
3090-120E           1       ECL      54.1     9     108    MF  1987  $985K
3090-150E           1       ECL      56.2   12.2    112    MF  1987 $1.65M
3090-180E           1       ECL      58.1   19.4    116    MF  1987  $2.7M
3090-280E           2       ECL      58.1   35.5    232    MF  1988  $4.9M
3090-200E           2       ECL      58.1   35.5    232    MF  1987  $4.6M
3090-300E           3       ECL      58.1    52     348    MF  1987  $5.6M
3090-400E           4       ECL      58.1    64     464    MF  1987  $8.4M
3090-500E           5       ECL      58.1    78     580    MF  1988  $9.7M
3090-600E           6       ECL      58.1    89     696    MF  1987 $10.3M

... snip ...

Filemode 7-9?

From: <lynn@garlic.com>
Subject: Re: Filemode 7-9?
Date: Tue, 25 Oct 2005 01:31:15 -0700
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
an example of 4341 uptake ... from old post
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

a little more drift related to growth of internal network in 83
https://www.garlic.com/~lynn/internet.htm#22

the 4341 follow-on was originally going to be an 801-based chip
https://www.garlic.com/~lynn/subtopic.html#801

however, there was a document produced showing that it was now possible to get 370 in silicon. the 801 strategy had been to take all the internal microprocessors and converge on a single chip architecture for microprocessors. however, the document showed that it was getting to be possible to directly implement 370 in silicon ... as opposed to taking a simpler microprocessor silicon engine and implementing 370 in microprogramming. however, by the time the 4381 shipped ... the great exploding market segment that saw 4341 and vax uptake was starting to move to workstations and high-end PCs. slightly related posting
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| question

some of the vax numbers during the period:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

Filemode 7-9?

From: <lynn@garlic.com>
Newsgroups: bit.listserv.vmesa-l
Subject: Re: Filemode 7-9?
Date: Sat, 29 Oct 2005 00:33:37 -0700
Bob Shair wrote:
This could be the answer to all our problems.................... :-)

Science and Technology News

The Zurich laboratory unveiled the world's fastest chip this week. The chip, code named "Timeless", is based on high temperature superconductors and is capable of transferring data signals faster than the speed of light. This makes it possible for a computer based on this chip to produce answers before questions are asked.

The chip is no doubt made of silicon doped with Resublimated Thiotimoline.

this is somewhat the issue raised in the reference posting below from comp.arch ... 360/370, continuing thru 3090, etc, was on a path that would have required infinitely fast cache cycles in order to provide memory consistency as the number of processors scaled up. their influence then somewhat impacted rochester, as they were looking at providing cache synchronization and memory consistency by a different approach (than requiring infinitely fast cache cycle time; note that signal propagation running at even a few multiples of the speed of light would not be sufficient).
https://www.garlic.com/~lynn/2005r.html#46 Numa-Q Information

in the above reference to rochester ... there is some chance that i was, at some point, a direct report of the individual cited.

Random Access Tape?

Refed: **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.comp.hardware,alt.computers,alt.folklore.computers
Subject: Re: Random Access Tape?
Date: Sun, 30 Oct 2005 11:22:44 -0800
jmfbahciv@aol.com wrote:
Part of the art of writing disk device drivers is to arrange the retrieval/storage addressing so that the physical specs of the device don't interfere with efficiency. This is where the seek times, revolution rates and controller specs are used.

recent posting in bit.listserv.ibm-main on redoing the implementation for multiple transfers per revolution
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction

one was redoing the cp67 implementation supporting the 2301 "drum", increasing the peak 4k page transfers from about 80/sec to 300/sec. the other was the handling of the 3330 disk when trying to do sequential transfers of 4k pages that might reside on different tracks ... but at the same arm/cylinder position.

i also did some further stuff when doing the page mapped filesystem support ... playing games with re-ordering requests for optimal revolution and arm motion ... even when the requests were originally presented in some totally different ordering. recent post in the same referenced thread
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction

note that the cms filesystem i/o was something left over from the real i/o paradigm ... that then had to be simulated in a virtual memory environment. the "simulation" process would execute the disk i/o in the order passed to it. the changes for page mapping allowed the i/o ordering to be re-organized for optimal execution. some more on that subject
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
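
as an aside ... a rough sketch of the kind of request re-ordering involved (purely illustrative python ... nothing like the actual cp67/vm370 code, which was 360 assembler): service everything queued for the current arm position in rotational order before moving the arm.

# illustrative sketch only -- not the original cp67/vm370 code.
# orders queued 4k-page transfer requests so that everything at the
# current arm/cylinder position is serviced in rotational order
# (approximating multiple transfers per revolution) before the arm
# is moved to the nearest cylinder with pending work.

from dataclasses import dataclass

@dataclass
class PageRequest:
    cylinder: int     # arm position needed for this transfer
    sector: int       # rotational position of the record

def order_requests(pending, cur_cyl, cur_sect, sectors_per_track=12):
    ordered, remaining = [], list(pending)
    cyl, sect = cur_cyl, cur_sect
    while remaining:
        same_cyl = [r for r in remaining if r.cylinder == cyl]
        if same_cyl:
            # next record to rotate under the heads on this cylinder
            nxt = min(same_cyl,
                      key=lambda r: (r.sector - sect) % sectors_per_track)
        else:
            # nothing left here ... shortest seek first
            nxt = min(remaining, key=lambda r: abs(r.cylinder - cyl))
        ordered.append(nxt)
        remaining.remove(nxt)
        cyl, sect = nxt.cylinder, nxt.sector
    return ordered

# e.g. requests presented in some totally different ordering:
reqs = [PageRequest(12, 0), PageRequest(10, 3), PageRequest(10, 1)]
print(order_requests(reqs, cur_cyl=10, cur_sect=0))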

feasibility of certificate based login (PKI) w/o real smart card

Refed: **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: microsoft.public.security.crypto
Subject: Re: feasibility of certificate based login (PKI) w/o real smart card
Date: Mon, 31 Oct 2005 09:37:46 -0800
Babji wrote:
Mike,

I agree that, what you have raised is a fundamental issue which has been solved by smart card and that keeps smart card a good 'known' solution as of today. But my concern is about OS 'support' for similar login without a 'real smart card' with an 'assumption' that the credentials can be stored 'safely' on an alternative memory. I don't intend to discuss how to protect the credentials on a non-smart card in this thread. To be more specific , the problem can be broken down into following sub problems :

1.How to carry your credentials ? (cert - that says who you are) with you (roaming).
2.How to protect the credentials ? (private key, makes your cert tamper proof)
3.How to use these credentials to authenticate to the system ? ( Windows Kerberos Extension supports PKI login using 'smart card').
4 ??? ...

Smart card solves problem 1 and 2 and windows support smart card to authenticate. But I am more concerned about 3 alone in my experiment. Especially from the implementation perspective.


the core technology is asymmetric key cryptography .... what one key (of a key-pair) encodes, the other key decodes (to differentiate from symmetric key cryptography where the same key is used for both encoding and decoding).

there is a business process called public key ... where one of the keys (of an asymmetric key pair) is identified as public and made readily available; the other key (of the pair) is identified as private, kept confidential and never divulged.

there is a business process called digital signature ... where a hash of a message/document is computed and encoded with the private key, resulting in the digital signature. The message/document, along with the (encoded hash) digital signature, is transmitted. The recipient (or relying party) can then recompute the hash, decode the digital signature with the corresponding public key and compare the two hashes. if the two hashes are the same, then the recipient can assume

1) the message/document hasn't changed since the digital signature was originally calculated

2) something you have authentication, aka the originator has access to and use of the corresponding private key.

from 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


... the verification of the digital signature typically represents something you have authentication.
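
for the code-inclined ... a minimal sketch of the hash/encode/verify flow described above (illustration only, written with the modern pyca/cryptography python package; obviously nothing period-specific about it):

# minimal digital-signature sketch (illustrative; modern
# pyca/cryptography package)
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# key pair: private key kept confidential, public key made available
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"some message/document"

# originator: hash the message and encode the hash with the
# private key, resulting in the digital signature
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient/relying party: recompute the hash, decode the digital
# signature with the corresponding public key, compare the hashes
try:
    public_key.verify(signature, message, padding.PKCS1v15(),
                      hashes.SHA256())
    print("message unchanged + something-you-have authentication")
except InvalidSignature:
    print("verification failed")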

the digital signature business process has assurance and integrity dependencies on the level of preserving the confidentiality of the private key. in many cases, private keys are protected by purely software means, which have numerous threats and vulnerabilities. Various kinds of hardware security tokens have been developed for improving the confidentiality, integrity and assurance of the private key
https://www.garlic.com/~lynn/subintegrity.html#assurance

the original pk-init draft for kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

originally just specified registering a public key in lieu of a password and performing straight-forward digital signature verification in lieu of password matching. subsequently, support for PKIs (certification authorities, digital certificates, etc) was added to the pk-init draft .... supplementing straight-forward authentication via digital signature verification
https://www.garlic.com/~lynn/subpubkey.html#certless

there is this business process frequently referred to as PKI, which was targeted at the letters of credit (or letters of introduction) paradigm from the sailing ship days. it was somewhat a methodology for addressing the offline email operation of the early 80s ... i.e. dial-up the (electronic) postoffice, exchange email, hangup, and then process email. The recipient could possibly then be faced with processing first-time email from a total stranger. The idea behind PKIs and x.509 identity certificates was to provide sufficient information for the recipient to decide how to deal with first-time communication from an otherwise total stranger ... having no local resources AND no direct communication mechanism for determining anything about the total stranger.

In the early 90s, this was somewhat expanded ... to suggest that x.509 identity certificates should be appended to all operations ... converting even the simplest of authentication operations into heavy-weight identification operations. It was further aggravated by certification authorities not necessarily knowing all possible identification information that a relying party might require about a total stranger. This somewhat led in the direction of attempts to grossly overload x.509 identity certificates with enormous amounts of personal information.

In the extension of kerberos pk-init to include support for PKI ... it basically implies that a total stranger, by presenting an x.509 identity certificate, will be authenticated and allowed to access the system.

in fact, most such systems don't actually do this ... effectively negating the basic PKI design point ... and with a little analysis, it is then trivial to demonstrate that the actual digital certificates are redundant and superfluous.

recent post with similar theme
https://www.garlic.com/~lynn/aadsm21.htm#20 Some thoughts on high-assurance certificates

Somebody is certified to access the system, their public key is then registered in the infrastructure ... as per the original kerberos pk-init draft ... and possibly additional information is also certified and recorded along with the public key. For instance, integrity information that might be worth certifying is whether or not the private key is protected by some form of hardware token and the level of protection said hardware token actually provides. Another piece of certified integrity information might be whether correct operation of the hardware token requires a PIN. A relying party that knows the integrity level of the private key protection might be willing to allow higher risk than if they had no information regarding the private key protection. Also, a hardware token with a PIN requirement can then imply two-factor authentication ... aka from
something you have
something you know
something you are


a PIN would imply something you know authentication in addition to the something you have authentication represented by the token. From a threat model standpoint .... PINs can be viewed as a countermeasure to lost/stolen tokens.

The corresponding process w/o using a hardware token is a software container, where the processes surrounding the access and use of the software container approximate those of a hardware token. The software container may have the contents encrypted, with a PIN/PASSWORD required for every extraction and use of the private key (this corresponds to the hardware token PIN countermeasure to a lost/stolen token).
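
a minimal sketch of such a software container (again, just an illustration ... not any particular product): the private key is kept encrypted at rest under a key derived from the pin/password, so every extraction/use requires it.

# sketch of a "software container" ... the private key is unusable
# at rest without the passphrase (the passphrase plays the role of
# the hardware token's PIN); modern pyca/cryptography package
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# write the container: key material encrypted under a key derived
# from the pin/password
container = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"my-pin"),
)

# every extraction and use of the private key requires the pin/password
key_again = serialization.load_pem_private_key(container, password=b"my-pin")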

An advantage of the hardware token ... is that most software containers can be replicated/copied ... possibly w/o the owner's knowledge. The attacker can then spend some brute-force effort recovering the private key and be able to impersonate the rightful owner for some period. The other countermeasure for a lost/stolen hardware token is that the owner reports it as lost/stolen (which allows the corresponding public key to be revoked). In the case of the software container being replicated/copied, the owner may not even realize that it has happened.

Note that PKIs and digital certificates are an additional business process layered on top of digital signature verification for simple authentication. The PKI and digital certificate business process layers were created so that stand-alone identification and processing could be performed for total strangers by the relying party w/o needing any previous contact and/or any other sources of information.

P2P Authentication

Refed: **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: comp.security.misc
Subject: Re: P2P Authentication
Date: Mon, 31 Oct 2005 12:51:19 -0800
Edward A. Feustel wrote:
Using mod_ssl with an Apache web service, you can specify mutual authentication. In this case the serving program and the client program both "sign" data, that is use their private keys to encrypt data that they send to the other. Using the public key of the serving program, the client program decrypts the encrypted data. The serving program does the same with the encrypted data provided by the client program. Assuming a correct decryption for both pieces of data (and assuming that the private keys have not been compromised), both the client and the server have been authenticated.

Usually https web transactions only require that the server be authenticated.

All this code could "easily" be adapted to provide SSL authentication of both parties in your P2P case.


the basic technology is asymmetric key cryptography ... what one key (of a key-pair) encodes, the other key decodes (to differentiate from symmetric key where the same key is used for both encoding and decoding).

there is a business process where one key (of the key pair) is identified as public and made readily available; the other key is identified as private and kept confidential and never divulged.

there is a business process called digital signature where a hash of a message/document is calculated and encoded with the private key, resulting in the digital signature. the originator then transmits the document/message along with the digital signature. The recipient recalculates the hash, decodes the digital signature with the appropriate public key (taken from the recipient's trusted public key repository) and compares the two hashes. If the two hashes are the same, then the recipient assumes that

1) the message hasn't been modified since the digital signature was calculated
2) something you have authentication, i.e. the originator has access to and use of the corresponding private key.

to somewhat address the business opportunity analogous to the letters of credit from the sailing ship days ... there is a business process involving PKI, certification authorities, and digital certificates. The target is somewhat the offline email of the early 80s, i.e. dial-up the (electronic) post-office, exchange email, hang-up and process email. The recipient may be dealing with first-time communication from a total stranger with no recourse to information about the stranger.

in this scenario, the stranger has gone to a certification authority and had certain information certified. the certification authority issues a digital certificate containing the certified information and the applicant's public key. this digital certificate is digitally signed by the certification authority. in this scenario, the stranger appends their digital certificate to the digitally signed message/document (as in the above description). the recipient first performs the digital signature verification process on the digital certificate issued by the certification authority (i.e. retrieving the certification authority's public key from the recipient's trusted public key repository to verify the certification authority's digital signature on the digital certificate). once the digital certificate has been validated, the recipient then retrieves the originator's public key from the digital certificate (having already verified the certification authority's digital signature) and uses the originator's public key to verify the digital signature on the actual message.

Given that the originator's digital signature verifies, the recipient can now use the other information in the digital certificate to determine how to deal with first-time communication from a total stranger.
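
a toy sketch of that cascading verification (illustration only ... a made-up certificate format rather than real x.509 parsing, again using the modern pyca/cryptography package):

# toy sketch of the cascading process ... made-up certificate format
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

def sign(priv, data):
    return priv.sign(data, padding.PKCS1v15(), hashes.SHA256())

def verify(pub, data, sig):   # raises InvalidSignature on mismatch
    pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stranger_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# relying party's trusted public key repository holds the
# certification authority's public key
trusted = {"some-ca": ca_key.public_key()}

# the "digital certificate": certified info plus the applicant's
# public key, digitally signed by the certification authority
cert_body = b"certified-info|" + stranger_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
cert_sig = sign(ca_key, cert_body)

# first-time communication: signed message with certificate appended
message = b"first-time communication from a total stranger"
msg_sig = sign(stranger_key, message)

# relying party: 1) verify the CA's signature on the certificate
verify(trusted["some-ca"], cert_body, cert_sig)
# 2) retrieve the originator's public key from the validated certificate
originator_pub = serialization.load_pem_public_key(cert_body.split(b"|", 1)[1])
# 3) verify the originator's signature on the actual message
verify(originator_pub, message, msg_sig)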

one of the issues that came up with this approach was the work on x.509 identity certificates in the early 90s. an objective was to require that an x.509 identity certificate be appended to all digitally signed operations, turning even the most trivial authentication operations into heavy-duty identification operations (even when first-time communication between complete strangers wasn't involved). The other issue was that it wasn't necessarily clear to certification authorities, at the time, exactly what identification information might be required by relying parties in the future ... so there was some direction to overload x.509 identity certificates with enormous amounts of personal information.

in the mid-90s there was some direction involving relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

because of the realization that enormous amounts of personal information in x.509 identity certificates represented significant privacy and liability issues. as a result, the entity information from the digital certificate was moved into some sort of repository entry at the relying party, and replaced in the digital certificate with some sort of index value into the repository. however, it is trivial to show that such digital certificates become redundant and superfluous, in part because this basically invalidates the fundamental justification for PKIs ... i.e. providing a solution for first-time communication between complete strangers.

for some topic drift ... a couple recent related postings on the subject
https://www.garlic.com/~lynn/2005s.html#42 feasibility of certificate based login (PKI) w/o real smart card
https://www.garlic.com/~lynn/aadsm21.htm#20 Some thoughts on high-assurance certificates

winscape?

Refed: **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers
Subject: Re: winscape?
Date: Wed, 02 Nov 2005 06:11:39 -0800
jmfbahciv@aol.com wrote:
Yes. Especially applications. App people have absolutely no idea about how to solve a computing problem for the _general timesharing public user_. Thus, a fix to underlying software that works for them is likely to break every other business' computing usage outside of their box.

the science center had done a lot of work on performance monitoring ... and created some performance monitoring tools that accumulated data 7x24 ... first on the science center machine
https://www.garlic.com/~lynn/subtopic.html#545tech

and then on hundreds of other machines (eventually accumulating something like a decade of information). this helped motivate the work on synthetic workloads, benchmarking, performance modeling, performance profiling and the work that eventually evolved into capacity planning.

one of the applications was an apl-based analytical model that evolved into the marketing support tool, available on hone, called the performance predictor
https://www.garlic.com/~lynn/subtopic.html#hone

that allowed marketing people to input information about customer configuration and load and ask what-if questions (changes to load and/or hardware).

when i was getting ready to release the resource manager & fair-share scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

a wide spectrum of synthetic workloads and configurations was evaluated to calibrate the operation of the resource manager. then a modified version of the performance predictor was created, which examined a broad range of customer workload and configuration data and created an automated benchmark specification (configuration, system options, synthetic workload characteristics). the benchmark would be run and the results fed into the modified predictor. the predictor validated the expected results for the resource manager and then also used the information as input for choosing the next benchmark configuration. this process eventually iterated through 2000 benchmarks that took three months of elapsed time to run.
https://www.garlic.com/~lynn/submain.html#bench
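
the shape of that iteration, as a sketch (every function below is a hypothetical stand-in ... the original was an apl-based model driving real vm370 benchmarks):

# shape of the iterative benchmarking loop (purely illustrative ...
# every function here is a stand-in)
import random

def run_benchmark(spec):
    # stand-in for: reboot to a clean system, run the synthetic
    # workload described by spec, collect the measurements
    return {"throughput": spec["users"] * random.uniform(0.9, 1.1)}

def predict(spec):
    # stand-in for the modified performance predictor
    return {"throughput": float(spec["users"])}

def next_spec(spec, results):
    # stand-in for: use the measured results to choose the next
    # configuration/workload point to probe
    return {"users": spec["users"] + 1}

spec = {"users": 10}
for _ in range(20):              # the real process iterated ~2000 times
    results = run_benchmark(spec)
    expected = predict(spec)
    # validate the resource manager: measured vs predicted
    assert abs(results["throughput"] - expected["throughput"]) \
        <= 0.2 * expected["throughput"]
    spec = next_spec(spec, results)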

note that the overall system technology was not only used for a lot of commercial and gov. installations ... but also used as the basis for a number of commercial time-sharing offerings (including the hone system that provided world-wide field, sales, and marketing support):
https://www.garlic.com/~lynn/submain.html#timeshare

some minor drift
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

winscape?

Refed: **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers
Subject: Re: winscape?
Date: Wed, 02 Nov 2005 13:27:55 -0800
jmfbahciv@aol.com wrote:
Now that I'm thinking about it, I bet the only data brought back inhouse from customer sites would have been a few facts gathered by Field Service. When people owned their gear, they also were leery about giving up their bits, too.

ref:
https://www.garlic.com/~lynn/2005s.html#44 winscape?

for a little topic drift ... there was an industry service that collected erep data (ras and error recording data) from a large number of customer installations .... somewhat removed the customer identifying information, and published summaries on machines. customers could compare reliability data on different machines, models, disks, etc. ... both from the vendor as well as all the clone processors, controllers, devices, etc.

in the late 1980 time-frame, stl was going to move something like 300 ims programmers to an offsite location. they really hated the idea of having to access systems at stl via remote 3270. I wrote HYPERchannel driver support that was basically a form of channel extension ... which allowed moving "local" kinds of devices to the remote location ... and sort of allowed them to operate over high-speed telco links.

as part of doing the driver support for channel extension ... i had to translate various kinds of unrecoverable telco errors back into some form of emulated local channel error (where then the standard ras support would do additional retries/restarts as well as record the information).

when 3090 processor had been in customer shops for a year ... one of the 3090 product managers tracked me down. they had predicted that for a certain class of channel errors, the 3090 would have approx. 3-5 such errors .... per year across ALL 3090s (i.e. not 3-5 errors per 3090 per year ... but 3-5 errors per year total aggregate across all 3090s). the industry service was reporting something like 15-20 total errors of this kind for the previous year across all installed 3090s.

it turned out that the emulated channel error condition that i had chosen had gotten out into driver support that was shipped to some customers ... and that the additional 10-17 errors for the 3090 product line for the previous year were coming from a customer that had this channel extension software installed. the report of these additional dozen or so errors for the 3090 product line for the year was causing an enormous commotion.

so i looked at the problem for a while ... and there was another channel-related error condition that i could emulate ... that for all intents and purposes resulted in the same RAS process ... but didn't inflate the statistic of specific interest.

a couple recent postings on the same subject
https://www.garlic.com/~lynn/2005d.html#28 Adversarial Testing, was Re: Thou shalt have no
https://www.garlic.com/~lynn/2005e.html#13 Device and channel

misc. past collected hsdt postings
https://www.garlic.com/~lynn/subnetwork.html#hsdt

previous post in this thread mentioning time-sharing and 3274
https://www.garlic.com/~lynn/2005s.html#17 winscape?

other recent postings touching on time-sharing subject
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#16 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005s.html#26 IEH/IEB/... names?

Various kinds of System reloads

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Subject: Re: Various kinds of System reloads
Date: Thu, 03 Nov 2005 04:09:58 -0800
blmblm@myrealbox.com wrote:
Would the distinction between "warm start" and "cold start" for OS/370 then be whether these queue files are re-initialized? I remember both terms being used with these systems, but not what the distinction between them was.

for cp67 & vm370, the "spool file system" .... basically emulated unit record information ... was subject to cold and warm start ... and chkpt start was later added.

part of booting the system included bringing up the spool file system ... it was a prerequisite to the system coming up .... so it had to come up. in part, the "spool file" disk area tended to be shared between spool files and dynamic virtual memory paging. if the area wasn't initialized and available ... you also didn't have paging ... integral to the operation of the system.

originally, a cold start initialized the area to completely empty (all existing data was lost). a warm shutdown would write the in-storage allocation information to disk at shutdown; on boot, this information would typically be reread back into memory. on things like power failure ... the system went down w/o having written the warm start information and had to come up cold. cp67 also added automatic creation of a "dump file" (in the system spool area) on a kernel crash ... and then automatic reboot (kernel failure, dump, save warm data, reboot, and warm start could be done in a minute or two).

there is this story about somebody at mit making a kernel change to one of their systems in an i/o driver and having 26 system crashes and reboots in the course of the day. the story is that this prompted some improvements in multics ... since multics at the time supposedly might take an hr or so to come back up after a crash
https://www.multicians.org/thvv/360-67.html

part of this was that the cp67 & vm370 stuff was done at the science center on 4th flr of 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

and multics was being done on the 5th flr of 545 tech sq.

to handle the power failure case, chkpt start was added. during normal operation, a very abbreviated version of some in-storage control information was written to disk. if reboot selected chkpt start ... boot would read the abbreviated information and then use that to reread the spool area to recreate the rest of the information. this was somewhat analogous to fsck, since the reread/reconstruct could take an hr for a spool system with a large number of files.
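
a toy sketch of the three start variants (illustration only ... dict-based stand-ins, nothing like the real cp67/vm370 data structures):

# toy sketch of the three spool start variants (illustration only)

def cold_start(spool):
    # initialize the area to completely empty ... all data lost
    spool["alloc"], spool["files"] = {}, {}

def warm_start(spool, disk):
    # warm shutdown wrote the in-storage allocation info to disk;
    # boot simply rereads it
    spool["alloc"] = dict(disk["warm_alloc"])
    spool["files"] = dict(disk["warm_files"])

def chkpt_start(spool, disk):
    # an abbreviated version of the control info was continuously
    # checkpointed during normal operation; boot rereads that and
    # then scans the spool area to reconstruct the rest (fsck-like,
    # potentially slow)
    cold_start(spool)
    for fid in disk["chkpt_info"]["file_ids"]:
        rec = disk["spool_area"][fid]          # reread/reconstruct
        spool["files"][fid] = rec
        for block in rec["blocks"]:
            spool["alloc"][block] = fid

disk = {"warm_alloc": {}, "warm_files": {},
        "chkpt_info": {"file_ids": ["f1"]},
        "spool_area": {"f1": {"blocks": [7, 8]}}}
spool = {}
chkpt_start(spool, disk)     # power failure case: no warm data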

the automatic reboot would bring the system back to the point where users could log back in. however, over the years, lots of automated processes were added to the normal operating environment ... and these required manual initialization.

for the automated benchmarking process, mentioned in recent posts
https://www.garlic.com/~lynn/2005s.html#44 winscape?
https://www.garlic.com/~lynn/2005s.html#45 winscape?

i had added a boot process that could be used to start up these automated processes at boot as part of normal production operation. the purpose for the automated benchmarking
https://www.garlic.com/~lynn/submain.html#bench

was to allow rebooting between the automated benchmarks ... to have a clean, fresh startup for each benchmark. the automated benchmarking could run for days unattended ... with automated reboot and starting of each new benchmark.

this feature was incorporated into the standard released system and is now part of standard production vm operation.

later doing some high-speed networking support as part of hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

i did "spool file system" rewrite ... recent lengthy post discussing characteristics of sfs rewrite
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction

it was primarily to significantly improve the performance ... but while i was at it ... i changed the paradigm so that spool areas could be added & removed during normal operation (w/o rebooting the system). this had the side-effect that even if chkpt-like start was required .... the system reboot didn't have to be delayed until it finished (i also redid how the chkpt operation was implemented so that it was significantly quicker).

Gartner: Stop Outsourcing Now

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Gartner: Stop Outsourcing Now
Date: Thu, 03 Nov 2005 04:43:47 -0800
WM wrote:
Of course, but let's try a short analogy I will make up as I go: at Boeing aircraft, it's OK (perhaps not optimal) for the sales guys to be measured and incented by how many 777 orders they get each quarter. It's OK for the facilities management team to be given a goal to reduce the cost of plant and office HVAC every year by being more energy efficient. But I don't think the airframe engineers should switch from aluminum to magnesium wings just because Mg is 10% cheaper on the spot market than Al and besides, they heard from their friends at Airbus that it works "great." A reasonably large part of what (competent) IT organizations do is engineering-like, and is supposed to be driven by analysis, research, and known-good prior practices that will produce a specified result.

some topic drift ... when i was an undergraduate ... i got con'ed into teaching a one-week computer class during spring break ... to the technical staff of the newly formed BCS. during summer break, i was hired as a full-time boeing employee (for the duration of the summer) to help with the organization and operation of BCS (being a full-time employee, i got to park in the management parking lot at boeing hdqtrs just off boeing field). that summer, serial 003 747 was flying in the skies overhead as part of flight certification.

i remember going thru the sales pitch and mock-up of the 747. one of the lines that stuck in my mind was that the 747 would be carrying so many people that airports would ALWAYS use at least four jetways (two on each side) ... to avoid passenger congestion getting on and off.

part of the current issue is that IT is being used for a wide variety of purposes ... from trivial word processing to being an integral part of business decision making. outsourcing has supposedly been acceptable for commodity operations that don't actually provide any business competitive advantage. one confusion factor may be understanding what parts of IT are just commodity operations that provide no competitive advantage and which parts are actually fundamental to the core business uniqueness.

one of my offspring had a co-op job, while he was going to school, at an air freight forwarding company (he answered phones and took freight orders ... he had to memorize most of the airport 3-letter codes and the freight dimensions for most of the flying planes). one of the things they also handled was aog (not oag, the official airline guide; aog ... aircraft on ground) for replacement equipment. typically parts were air freighted to the airport where the plane was on the ground ... but periodically it required physical courier of the part to some place in the world. they had converted to emailing the equipment invoices to the part depot (instead of physical paper). during one of the past internet virus outbreaks, their email was offline for several days ... and they had to try to find real paper invoice forms and physically carry the invoices to the part depot. it was quite traumatic.

one could claim that a large part of business communication has now been outsourced to the internet ... w/o necessarily understanding all the implications.

and of course, i couldn't finish w/o mentioning my favorite competitive business subject ... john boyd ... collection of my past posts
https://www.garlic.com/~lynn/subboyd.html#boyd

and numerous other boyd pages from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

sample from above:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

Gartner: Stop Outsourcing Now

Refed: **, - **, - **
From: <lynn@garlic.com>
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Gartner: Stop Outsourcing Now
Date: Thu, 03 Nov 2005 06:09:29 -0800
re:
https://www.garlic.com/~lynn/2005s.html#47 Gartner: Stop Outsourcing Now

as an aside ... during that period ... there were a number of companies that sort of spun off their IT operations as independent business units. part of the issue was that internal corporate politics frequently interfered with running an efficient IT operation. there were frequently large capital requirements with long preparation periods ... capital justification, physical facilities, simply long lead times for order delivery. there were frequently all sorts of internal corporate politics that would interfere with managing all of the practical aspects in an efficient manner.

a separate business operation would create a slightly more business-like relationship between the IT organization and the rest of the business; some of it was as simple as changing IT from being a cost center to being a profit center (at least as far as the books were concerned ... i.e. even if it was corporate funny money ... there were real live business transfers between business operations). there are some aspects of that in some of the current IT outsourcing ... where delivery of the IT business operation to an outside business manages to address some internal corporate politics that couldn't be addressed any other way.

phishing web sites using self-signed certs

Refed: **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: phishing web sites using self-signed certs
Date: Thu, 03 Nov 2005 14:09:44 -0800
Arshad Noor wrote:
While third-party verification is not the real issue, the issue is: can the third-party itself be trusted? Who remembers the Verisign debacle from a few years ago with the Class-3 digital certificates issued through a social engineering attack, in the name of Microsoft?

http://news.com.com/2100-1001-254586.html?legacy=cnet
http://www.eweek.com/article2/0,1895,1243314,00.asp


there are actually several separate issues, some of which are

• who is doing certification
• what process are they using for certification
• are they willing to accept liability associated with their certification
• how is the certification represented

using a taxonomy that clearly delineates the difference between certification of information and using digital certificates to represent that certification process ... somewhat shows up some of the fallacy of self-signed digital certificates .... part of this is that people sometimes seem to confuse the existence of a digital certificate with having some magical certification quality all by itself ... rather than being a representation of some certification process.

PKIs and digital certificates are a business process to address the letters of credit paradigm from the sailing ship days for offline certification representation ... i.e. the relying party has no mechanism for doing real-time and/or online checking of the validity of the information. furthermore, the current generation of certification authorities have tended to be independent 3rd parties who check with various authoritative agencies as to the validity of some information and then issue certificates representing that such a checking process has been done. they typically haven't been the authoritative agency actually responsible for the verified information.

as the online world (with the internet) becomes more pervasive ... some of the authoritative agencies actually responsible for the various kinds of information being verified have looked at providing online, real-time verification services associated with the information in question (as opposed to the stale, static certificate model that was designed to meet the needs of relying parties that had no direct way of actually contacting the authoritative agency for directly verifying the information).

to some extent, as the online, internet world has become more pervasive ... the target offline market for digital certificates has shrunk and there has been some migration to the no-value market segment. rather than the relying party being unable to directly contact the authoritative agency responsible for the information, the no-value market has the relying party doing operations where there is insufficient value to justify directly contacting the authoritative agency (aka no-value operations). even this market segment is shrinking as the internet is not only providing pervasive world-wide online connectivity but also drastically reducing the cost of that online connectivity world-wide.

a couple related posts on the subject:
https://www.garlic.com/~lynn/2005s.html#43 P2P Authentication
https://www.garlic.com/~lynn/aadsm21.htm#20 Some thoughts on high-assurance certificates
https://www.garlic.com/~lynn/aadsm21.htm#21 Some thoughts on high-assurance certificates

misc. collected past posts on ssl domain name server certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

misc. collected past posts on certification environments that can be done w/o requiring digital certificates to represent that certification
https://www.garlic.com/~lynn/subpubkey.html#certless

Various kinds of System reloads

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Subject: Re: Various kinds of System reloads
Date: Fri, 04 Nov 2005 04:22:48 -0800
Peter Flass wrote:
I think you're confusing a few things. In IBM terms, a SYSGEN was the procedure to tailor the OS (originally the complete system with bundling) to your environment, selecting supervisor options, job entry subsystem, compilers, etc. Although it was an assembly, the "stage 1" SYSGEN macros just generated a huge JCL deck. "Stage 2" consisted of running this generated jobstream to actually build your system. A full SYSGEN was very infrequent. More common was the IOGEN, which ran just one set of jobs to configure the peripherals.

At IPL time, IBM systems ran "NIP" which was a stupid program to load various parts of the OS and initialize the system. I don't know about TOPS-10, but most other mainframe OS's had similar programs - Multics used BOS, for example. I think nearly all PC OS's use this scheme now, to avoid having to package the entire system as one monolithic executable. AFAIK, Linux is the exception, but at the expense of having a lot of unneeded drivers linked into the kernel.


ref:
https://www.garlic.com/~lynn/2005s.html#46 Various kinds of System reloads
https://www.garlic.com/~lynn/2005s.html#50 Various kinds of System reloads

cp67 (and vm370) "sysgen" was even simpler. the txt decks (output from the assembler) were bundled together and booted from cards using a card loader from BPS (basic progamming system ... about the most primitive of the systems for s/360). when the BPS loader had gotten everything into memory image ... it transferred control to cpinit. cpinit wrote a page formated image (of what had just been loaded into memory) to disk ..... and then wrote the necessary disk boot/ipl records to read cpinit back into memory.

on disk boot/ipl, cpinit would be read back into memory ... it would recognize that it was being loaded from disk (rather than via the earlier bps loader process) and reverse the earlier page-block i/o process for the kernel image ... this time doing reads instead of writes. once the kernel image was back in memory, it would then do the cold/warm(/chkpt) process for the spool file system. after that it would transfer control to normal kernel operation.
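
a sketch of the two-sided cpinit logic (illustration only ... the real thing was 360 assembler dealing in page-formatted disk records; everything below is a hypothetical stand-in):

# sketch of the two-sided cpinit logic (illustration only)

def bring_up_spool():
    # cold/warm(/chkpt) start of the spool file system
    pass

def transfer_to_kernel():
    # normal kernel operation begins
    pass

def cpinit(loaded_from, memory_image, disk):
    if loaded_from == "bps-card-loader":
        # first time: write a page-formatted image of what the loader
        # just put into memory out to disk ...
        disk["kernel_pages"] = list(memory_image)
        # ... plus the boot/ipl records needed so that a disk ipl
        # reads cpinit itself back into memory
        disk["ipl_records"] = "read cpinit back in; enter cpinit"
    else:
        # disk ipl: reverse the earlier process, reads instead of
        # writes, to restore the kernel image
        memory_image[:] = disk["kernel_pages"]
        bring_up_spool()
        transfer_to_kernel()

disk = {}
memory = ["page0", "page1"]              # stand-in for the loaded kernel
cpinit("bps-card-loader", memory, disk)  # build the bootable disk image
cpinit("disk-ipl", [], disk)             # later: boot from that disk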

my modification for doing the automated benchmarking was one of the last things invoked by the cpinit process before transferring control to standard kernel processing (cpinit in vm370 was renamed dmkcpi)
https://www.garlic.com/~lynn/submain.html#bench

this automated startup procedure was released as part of the standard product.

random past posts mentioning bps loader:
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/98.html#9 ** Old Vintage Operating Systems **
https://www.garlic.com/~lynn/99.html#135 sysprog shortage - what questions would you ask?
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#26 HELP
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002h.html#35 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#2 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2003o.html#23 Tools -vs- Utility
https://www.garlic.com/~lynn/2004b.html#33 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004f.html#11 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005g.html#52 Software for IBM 360/30

random past posts mentioning having redone os/360 stage2 sysgen:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2001d.html#48 VTOC position
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#51 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004k.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2005b.html#41 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005h.html#6 Software for IBM 360/30 (was Re: DOS/360: Forty years)
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
https://www.garlic.com/~lynn/2005n.html#40 You might be a mainframer if... :-) V3.8
https://www.garlic.com/~lynn/2005o.html#12 30 Years and still counting
https://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#38 IEH/IEB/... names?

phishing web sites using self-signed certs

From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: phishing web sites using self-signed certs
Date: Fri, 04 Nov 2005 04:48:54 -0800
Julien Pierre wrote:
The point I was trying to make is that there is not one unique correct certificate. The fact that the certificate changed or didn't change tells you nothing about its validity.

In fact the certificate could be the same both times, but it could have been revoked by the CA between the two communications, eg. for reason of key compromise, and you could in fact be dealing with a phisher the second time .

The fact is that the information about which certificate was used in a previous communication is not relevant to the problem of authenticating the site the 2nd time around. To provide security, the certificate needs to be fully verified and validated again.


the standard process for receivers/relying-parties for validating digital signatures is that they retrieve the corresponding public key that had been previously loaded into their trusted public key repository.

PKIs, certification authorities, and digital certificates extended this process .... targeted for first time communication between two strangers that had no other method of resolving information about each other.

the certification authority would take information about an applicant (including their public key), validate the information and package the information in a digitally signed message called a digital certificate.

as part of this infrastructure ... using a standard out-of-band process for relying parties dealing with trusted public keys, the certification authorities would distribute their public keys packaged in self-signed digital certificates. the relying parties would use a standard out-of-band process for validating the certification authority's public key (contained in the certification authority's self-signed digital certificate) before loading the certification authority's public key into the relying party's trusted public key repository.

in the future, any time the recipient/relying party received one of these specially formatted digitally signed messages called digital certificates, they would retrieve the corresponding public key from their trusted public key repository to verify the certification authority's digital signature on the digital certificate.

this part of the process is identical whether the trusted public keys involved are from normal individuals or from trusted certification authorities. in both cases, the subject public keys have been validated by some out-of-band process before being loaded into the recipient/relying-party trusted public key repository.

the PKI, certification authority paradigm then defined a special cascading process relying on specially formatted digitally signed messages called digital certificates ... allowing relying parties to deal with first-time communication with a stranger when they have no other real-time or local mechanism for resolving information about the stranger.

from a theoretical digital signature business process standpoint ... the out-of-band ability to validate information associated with a self-signed (certificate) message before loading the corresponding public key into the recipient's trusted public key repository ... is no different for certification authority public keys than for normal individual public keys.

In digital signature implementations like PGP, there tends to be a lot more administrative support for relying parties to manage the public keys in their local trusted public key repositories.

TTP and KCM

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: TTP and KCM
Date: Sat, 05 Nov 2005 04:25:31 -0800
Julien Pierre wrote:
If you believe that, you should design and propose a standard protocol that can support KCM and propose it to the IETF. The model of X.509 certs supported by the SSL protocol and RFC3280 cert path validation as it exists today is not compatible with KCM. Implementing KCM with X.509 and SSL (or S/MIME, as in the paper referenced) would be a violation of the standards, as it would cause the application to produce errors in cases the standards state are valid. There are very legit reasons for changing cert and key, such as to increase security if the old key has been compromised, or if you simply want a larger key size because you think the key has become crackable. The KCM model breaks apart in those cases. The X.509 model has revocation to deal with them.

I browsed through the KCM document and IMO it does not solve the problem of man-in-the-middle attacks. TTP is not perfect, but has revocation to recover from mistakes - what does KCM have?


one could claim that the business process part of KCM is essentially the old business process of registering pin/passwords and then continuing to use them over long periods of time.

that is also the PGP model ... of registering public keys in lieu of pin/password ... and doing digital signature verification with the registered public keys; aka the relying party having a set of registered public keys in their trusted repository of public keys.
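
a minimal sketch of that registration/continuity model (much like ssh's known_hosts; everything here is illustrative):

    import hashlib

    registered = {}   # relying party's trusted repository of public keys

    def fingerprint(public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check_key_continuity(peer, public_key_bytes):
        fp = fingerprint(public_key_bytes)
        if peer not in registered:
            # first contact: register the key, like registering a pin/password
            registered[peer] = fp
            return "registered on first use"
        if registered[peer] == fp:
            return "same key as before - continuity holds"
        # the hard case raised above: at this layer a legitimate rekey is
        # indistinguishable from a man-in-the-middle
        return "KEY CHANGED - rekey or man-in-the-middle?"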

note that the foundation for all the PKIs ... is in fact based on using this very business process model for establishing their root trusts. every relying party has to have a trusted repository of registered public keys in order for anything to work (even PKI). frequently, in the case of PKI supported processes ... they attempt to restrict the registration of public keys (in relying party trusted repositories) to just those entities called certification authorities.

when relying party trusted repositories are restricted to just the registration of public keys from certification authorities ... then those corresponding public keys are only used to validate digital signatures on special purpose messages called digital certificates. however, as repeatedly pointed out in the past ... the business process of using registered public keys from the relying party's trusted repository of registered public keys to verify digital signatures on messages ... turns out to be the same exact business process whether the digital signatures being verified are applied to specially formatted messages called digital certificates ... or to any message, regardless of the contents.

if the TTP model were to declare that the business process of registering public keys in the relying party's trusted public key repository was not possible ... then it would also not be possible for the relying party to have certification authority public keys (distributed in self-signed digital certificates) available ... and the whole TTP process collapses.

note that the x.509 identity certificate standards from the early 90s had some interesting standards issues. two glaring issues were

1) the x.509 identity certificate non-repudiation bit ... which somehow conveyed the magical property that if a relying party could produce a digital certificate that contained the public key and had the non-repudiation bit set ... then any document or message digitally signed by the originator ... carried with it the property that the signer had read, understood, agreed, approved, and/or authorized the contents of the signed object.

2) TTPs were somewhat unsure as to what information future relying parties might find of interest. as a result there was some direction to grossly overload x.509 identity certificates with enormous amounts of personal information. by the mid-90s, there was an increasing realization that x.509 identity certificates grossly overloaded with personal information represented significant privacy and liability issues.

in the case of #2 and the enormous privacy issues represented by x.509 identity certificates ... you found some number of organizations retrenching to relying-party-only certificates ... some number of past collected postings on relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

basically the enormous amounts of personal information are moved to some sort of database and replaced in the digital certificate with a database index or lookup value. however, it is trivial to demonstrate that relying-party-only certificates are redundant and superfluous.
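
a sketch of that redundancy argument (the account table and names are invented for illustration): if the relying party has to look up its own database record anyway, and that record already holds the validated public key, the attached certificate adds nothing.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    account_key = ed25519.Ed25519PrivateKey.generate()

    # the relying party's own account record already carries the public key
    # that was validated/registered when the account was set up
    accounts = {"acct-0001": {"public_key": account_key.public_key()}}

    def authorize(account_number, transaction, signature):
        # a relying-party-only certificate would carry nothing beyond this
        # same account number ... the database lookup happens regardless,
        # so attaching the certificate to the transaction is redundant
        accounts[account_number]["public_key"].verify(signature, transaction)

    txn = b"debit 100 from acct-0001"
    authorize("acct-0001", txn, account_key.sign(txn))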

in the case of #1 above, there are a variety of issues.

a) fundamentally, digital signature verification can be treated as a form of something you have authentication ... from the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


there was a direction with x.509 identity certificate standards to mandate that all digitally signed operations required the appending of an x.509 identity certificate. This basically morphed all digital signature authentication operations (even the simplest and most straightforward authentication) into heavy-weight identification operations.

b) a nominal digital signature authentication may involve the server transmitting some random data as a kind of challenge (for instance, as a countermeasure to replay attacks). the receiver digitally signs the data and returns the digital signature for authentication. the contents of the challenge are not normally examined by the person doing the digital signing. the digital certificate non-repudiation bit would imply that if the relying-party could produce any digital certificate with the non-repudiation bit set (for the signer's public key) ... then it was proof that the signer had read, understood, agreed, approved, and/or authorized what was digitally signed (even if the person had not actually examined what was signed).
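
a sketch of that nominal challenge/response flow (illustrative names, ed25519 for brevity):

    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519

    client_key = ed25519.Ed25519PrivateKey.generate()
    registered_key = client_key.public_key()   # already on file at the server

    # server side: random data as challenge (countermeasure to replay attacks)
    challenge = os.urandom(32)

    # client side: digitally signs the challenge without examining it
    response = client_key.sign(challenge)

    # server side: authentication succeeds if the digital signature verifies
    registered_key.verify(response, challenge)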

misc. past posts referencing dual-use attacks (people thinking they are signing random data ... when it turns out to be some valid transaction or contract) and/or non-repudiation attacks (note that the definition of the standard's non-repudiation bit has since become significantly deprecated).
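
the dual-use attack, sketched against the challenge/response flow above (client_key and registered_key carry over from it): nothing distinguishes random challenge bytes from a real transaction, so the same mechanics produce a valid digital signature over something the signer never looked at.

    # attacker substitutes a valid transaction for the "random" challenge
    bogus_challenge = b"transfer 100000 to account 9999"

    # the victim signs it unexamined, exactly as in the authentication flow
    signature = client_key.sign(bogus_challenge)

    # the digital signature verifies over a message the signer never read;
    # a non-repudiation bit would then claim read/understood/approved
    registered_key.verify(signature, bogus_challenge)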

misc. past posts on dual-use attacks:
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm21.htm#5 Is there any future for smartcards?
https://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the security challenges
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005q.html#23 Logon with Digital Siganture (PKI/OCES - or what else they're called)



