List of Archived Posts

2006 Newsgroup Postings (03/22 - 04/15)

using 3390 mod-9s
using 3390 mod-9s
using 3390 mod-9s
using 3390 mod-9s
using 3390 mod-9s
3380-3390 Conversion - DISAPPOINTMENT
64-bit architectures & 32-bit instructions
The Pankian Metaphor
Wonder why IBM code quality is suffering?
The Pankian Metaphor
The Pankian Metaphor
Anyone remember Mohawk Data Science ?
Barbaras (mini-)rant
Barbaras (mini-)rant
The Pankian Metaphor
trusted certificates and trusted repositories
trusted repositories and trusted transactions
trusted certificates and trusted repositories
how much swap size did you take?
Over my head in a JES exit
Old PCs--environmental hazard
Over my head in a JES exit
A very basic question
Old PCs--environmental hazard
Over my head in a JES exit
Over my head in a JES exit
Old PCs--environmental hazard
Old PCs--environmental hazard
Old PCs--environmental hazard
X.509 and ssh
A very basic question
X.509 and ssh
X.509 and ssh
X.509 and ssh
X.509 and ssh
X.509 and ssh
X.509 and ssh
Over my head in a JES exit
Over my head in a JES exit
X.509 and ssh
A very basic question
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor

using 3390 mod-9s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using 3390 mod-9s
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 22 Mar 2006 11:44:01 -0700
ref:
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s

part of the caching/electronic store discussions that went on in the 70s had to do with global LRU and global caches vis-a-vis local LRU and partitioning.

as an undergraduate in the 60s, i had also done the global LRU stuff for cp67
https://www.garlic.com/~lynn/subtopic.html#wsclock

about the same time, there was some work published in the literature about working sets and local LRU strategies.

in the early 70s, there was an effort by the grenoble science center to implement local LRU strategy for cp67 as per the academic literature. The cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

was running cp67 on 360/67 with 768k of real memory (104 pageable pages after fixed storage requirements) and my global LRU implementation. grenoble had a 360/67 with 1mbyte of real memory (155 pageable pages after fixed storage requirements). Cambridge, with global LRU and 104 pageable pages, basically supported about twice as many users (75-80) at the same performance and response as Grenoble supported (30-35 users) with local LRU and 155 pageable pages.

in the late 70s, the disk activity project was gathering extensive filesystem/disk record access traces, which were then being used for various analyses ... including a cache simulator looking at various kinds of device, controller, channel, subsystem, and system caching strategies. except for some specific scenarios (device-level full-track buffer as compensation for rotational synchronization and things like RPS-miss) ... the modeling found that for any given, fixed amount of electronic storage (and all other things being equal), a single "system level" cache implementation always out-performed any cache partitioning strategy (i.e. some part of the electronic storage partitioned into device, controller, channel, etc caches).

About the same time (late 70s) there was a big fuss being made over a stanford phd effort that was basically covering the stuff that I had done as an undergraduate in the 60s. This stanford phd thesis was doing global LRU ... and some of the people that had been indoctrinated by the local LRU papers in the academic literature were objecting to the phd being awarded (since global LRU conflicted with their LRU beliefs).

I was somewhat able to contribute to resolving the disagreement since Grenoble had published an ACM article on their local LRU effort (in the early 70s) ... and I had hardcopy of some of their supporting performance data ... as well as similar data from the Cambridge system doing global LRU (for an apples to apples comparison of the two strategies running the same system, same hardware, and similar workload).

in any case there is direct correspondence between the partitioning that occurs in local LRU cache strategies and the physical partitioning that occurs in device and/or controller level caches.
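
as an aside, a toy present-day sketch (in C) of the partitioning effect being described ... none of this is the cp67 code, and the pool sizes and workload skew below are invented purely for illustration. it drives the same reference stream at a single LRU-managed pool and at the same total storage split into fixed per-stream partitions; with this kind of skewed per-stream demand, the single pool comes out with the better overall hit ratio:

/* toy comparison: one shared LRU pool vs the same total storage split
   into fixed per-stream partitions.  pool sizes and workload skew are
   invented purely for illustration -- this is not the cp67 code, just
   the general partitioning argument. */

#include <stdio.h>
#include <stdlib.h>

#define FRAMES 16              /* total "real storage" frames              */
#define REFS   100000          /* length of the simulated reference stream */

struct lru {                   /* trivial O(n) LRU pool of page ids */
    long page[FRAMES], stamp[FRAMES];
    int  size;
    long clock, hits, refs;
};

static void lru_init(struct lru *c, int size)
{
    c->size = size; c->clock = c->hits = c->refs = 0;
    for (int i = 0; i < size; i++) { c->page[i] = -1; c->stamp[i] = 0; }
}

static void lru_ref(struct lru *c, long page)
{
    int victim = 0;
    c->refs++; c->clock++;
    for (int i = 0; i < c->size; i++) {
        if (c->page[i] == page) {              /* hit: refresh LRU stamp */
            c->stamp[i] = c->clock; c->hits++; return;
        }
        if (c->stamp[i] < c->stamp[victim]) victim = i;
    }
    c->page[victim] = page;                    /* miss: replace least recent */
    c->stamp[victim] = c->clock;
}

/* two streams with very different working sets: stream 0 gets 75% of the
   references spread over 24 pages, stream 1 gets 25% over just 6 pages */
static long next_ref(int *stream)
{
    *stream = (rand() % 4 == 0);
    return *stream ? 1000 + rand() % 6 : rand() % 24;
}

int main(void)
{
    struct lru shared, part[2];
    lru_init(&shared, FRAMES);
    lru_init(&part[0], FRAMES / 2);
    lru_init(&part[1], FRAMES / 2);

    for (long i = 0; i < REFS; i++) {
        int s; long p = next_ref(&s);
        lru_ref(&shared, p);                   /* one global pool ...        */
        lru_ref(&part[s], p);                  /* ... vs fixed half per user */
    }
    printf("shared pool hit ratio:  %.3f\n", (double)shared.hits / shared.refs);
    printf("partitioned hit ratio:  %.3f\n",
           (double)(part[0].hits + part[1].hits) / (part[0].refs + part[1].refs));
    return 0;
}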

a couple posts that reference the lru/clock thesis and related controversy
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names

misc. other posts mentioning the lru/clock controversy
https://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#55 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#0 Alpha performance, why?
https://www.garlic.com/~lynn/2003k.html#8 z VM 4.3
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#35 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer

using 3390 mod-9s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using 3390 mod-9s
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 22 Mar 2006 18:29:35 -0700
Joe Morris writes:
By any chance were you the author (or inspiration) for the articles about dataset and PDS member placement that were published in the Installation Newsletter? (And is my memory correct that the article was in the white-paper part of the INL which wasn't supposed to be seen by mere customers?)

ref:
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

i was an undergraduate working for the univ. at the time that i was doing the os/360 and cp67 work. I don't remember paying any attention to ibm documents on the subject in that time-frame.

the later event was when the disk division got upset about my characterizing disk relative system thruput as having declined by a factor of ten, and instructed the division performance group to refute the statement. they eventually came out and said i had slightly understated the situation.

specific reference in previous post
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)

the study was eventually turned into a presentation by the disk division delivered to Guide and Share user group meetings in the '84 time-frame (some extracts from the presentation)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)

I didn't pay a whole lot of attention to where else it may have been published.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

using 3390 mod-9s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using 3390 mod-9s
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 22 Mar 2006 18:45:36 -0700
Brian Inglis writes:
The CMS approach of keeping the active disk and file tables in local VM memory seemed to keep response times way down for file ops. Using shared segs for fairly static disk directories: S, Y, P, and L (local) helped by moving more I/O from the filesystem to paging. Used to just EXECIO * DISKR files onto the program stack or later into REXX arrays/stem variables, unless they were very large.

ref:
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

I had done a lot of virtual memory (internal) enhancements on cp/67 and moved them over to vm/370 (again internal). some of those enhancements were eventually picked up and shipped in the product ... like a small subset of the stuff around what they called DCSS. It also included a "page mapped" filesystem infrastructure for CMS ... which would have greatly enhanced the sharing and caching of CMS information. misc. collected posts discussing the page mapped stuff
https://www.garlic.com/~lynn/submain.html#mmap

the page mapped transition apparently was too great a paradigm change (although I had done quite a bit of work on providing compatible semantics for existing operation) ... and never shipped. given the full page mapped semantics, sharing and caching of all cms stuff becomes trivially straight-forward.

CMS was otherwise a significantly file intensive infrastructure (especially by pc standards) which was only partially mitigated by things like placing small amounts of high-use CMS stuff in (DCSS) shared segments.

This became extremely painfully apparent with XT/370 ... it was single user operation in an ibm/pc frame with all vm and cms stuff mapped to standard ibm/xt 110ms (access) disks. None of the sharing stuff held any benefit ... since you were the only one on the system (and the available storage was also painfully limited by mainframe standards).

some past discussion of xt/370
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

using 3390 mod-9s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using 3390 mod-9s
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 23 Mar 2006 12:18:47 -0700
Anne & Lynn Wheeler wrote:
the study was eventually turned into a presentation by the disk division delivered to Guide and Share user group meetings in the '84 time-frame (some extracts from the presentation)
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)


the above is an old posting of the preface to the share presentation. the title page for that share presentation is

SHARE 63 Presentation B874

DASD Performance Review
8:30 August 16, 1984
Dr. Peter Lazarus

IBM Tie Line 543-3811
Area Code 408-463-3811
GPD Performance Evaluation
Department D18
Santa Teresa Laboratory
555 Bailey Avenue
San Jose, CA., 95150

========================

for a little more drift, STL had told me that even if i provided them with fully tested and integrated FBA support for MVS ... that it would still cost $26m to ship ... and I didn't have an ROI business case for that $26m (i.e. it would just sell the same amount of FBA disk in place of CKD disk ... so no really new revenue).

The transition to FBA would have (at least) converted the I/O extravagant multi-track search paradigm for VTOC and PDS directories to data structures that were loaded (and cached) in memory for search/lookup. The multi-track search paradigm from the mid-60s that represented a trade-off in relatively abundant I/O resources for relatively scarce real storage resources ... was no longer valid even ten years later.

OS/360 did get partial mitigation for the effects of multi-track search (and loading) with RAM and BLDL lists.
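
to make the trade-off concrete, a toy C sketch (these are not the os/360 VTOC/PDS formats ... entry layout, names and the 5-track/16.7ms figures are purely illustrative): read the directory from disk once, then satisfy every lookup with an in-memory search, versus a multi-track search that holds device, control unit and channel busy for full disk revolutions on every lookup:

/* sketch of the "load the directory once and search it in memory"
   alternative to multi-track search.  structures and numbers are
   illustrative only (not actual VTOC/PDS formats); the point is that
   an in-memory lookup costs microseconds of cpu, while a multi-track
   search holds the device/control-unit/channel busy for full disk
   revolutions (16.7ms per revolution on a 3600rpm drive). */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dir_entry {             /* hypothetical directory entry        */
    char name[9];              /* member/dataset name (8 chars + nul) */
    unsigned track, record;    /* where the data actually lives       */
};

static int cmp(const void *a, const void *b)
{
    return strcmp(((const struct dir_entry *)a)->name,
                  ((const struct dir_entry *)b)->name);
}

int main(void)
{
    /* pretend this was read from disk once at open time ... */
    struct dir_entry dir[] = {
        { "PAYROLL", 12, 3 }, { "ASMPGM",  7, 1 },
        { "REPORT",  20, 5 }, { "IEFBR14", 2, 2 },
    };
    size_t n = sizeof dir / sizeof dir[0];
    qsort(dir, n, sizeof dir[0], cmp);

    /* ... after which every lookup is an in-memory binary search */
    struct dir_entry key = { "REPORT" }, *hit;
    hit = bsearch(&key, dir, n, sizeof dir[0], cmp);
    if (hit)
        printf("%s -> track %u record %u (memory lookup)\n",
               hit->name, hit->track, hit->record);

    /* back-of-envelope for the multi-track search alternative:
       a directory spanning, say, 5 tracks costs up to 5 revolutions
       of dedicated device/control-unit/channel time per lookup */
    printf("multi-track search: ~%.1f ms busy per lookup\n", 5 * 16.7);
    return 0;
}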

past postings in this thread:
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#1 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s

misc. past posts about STL quoting $26m additional to ship MVS FBA support (even after being provided already fully integrated and tested).
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive
https://www.garlic.com/~lynn/2005u.html#21 3390-81

using 3390 mod-9s

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using 3390 mod-9s
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 24 Mar 2006 08:45:08 -0700
Anne & Lynn Wheeler wrote:
for a little more drift, STL had told me that even if i provided them with fully tested and integrated FBA support for MVS ... that it would still cost $26m to ship ... and I didn't have an ROI business case for that $26m (i.e. it would just sell the same amount of FBA disk in place of CKD disk ... so no really new revenue).

The transition to FBA would have (at least) converted the I/O extravagant multi-track search paradigm for VTOC and PDS directories to data structures that were loaded (and cached) in memory for search/lookup. The multi-track search paradigm from the mid-60s that represented a trade-off in relatively abundant I/O resources for relatively scarce real storage resources ... was no longer valid even ten years later.


ref:
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

while I couldn't prove that adding FBA support to MVS would sell more DASD ... I did try and make the case that eliminating CKD and migrating to FBA would eliminate enormous amounts of infrastructure costs over the years ... like all the ongoing infrastructure gorp related to track lengths (aside from the issue of eliminating the enormous performance degradation resulting from continuing to carry the multi-track search paradigm down thru the ages).

3380-3390 Conversion - DISAPPOINTMENT

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 3380-3390 Conversion - DISAPPOINTMENT
Date: Sun, 26 Mar 2006 08:33:21 -0700
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers, bit.listserv.vmesa-l
DASDBill2 wrote:
I once wrote a deblocker program to read in 640K tape blocks and break them up into QSAM-friendly chunks of 32,760 bytes or less. It was an interesting exercise, made even more so by having to run it on MVS under VM, which caused a lot of unrepeatable chaining check errors due to the very long channel program to read in 640K in one I/O request.

cp/67 on 360/67 had to translate ccws from the virtual machine ... and use data-chaining where the virtual machine CCW specified a contiguous virtual data area ... but the corresponding virtual pages were scattered around (real) memory.

moving to 370, IDALs were provided in lieu of data-chaining to break up virtually contiguous areas into non-contiguous pages. part of this is that the standard channel architecture precluded pre-fetching CCWs (they had to be fetched and executed synchronously). on 360, breaking a single ccw into multiple (data-chaining) CCWs introduced additional latencies that could result in timing errors. on 370, non-contiguous areas could be handled with IDALs ... and the channel architecture allowed prefetching of IDALs ... supposedly eliminating the timing latencies that could happen with the data-chaining approach.

This issue of channels working with real addresses ... necessitating that CCWs built with virtual addresses be translated to a shadow set of CCWs with real addresses ... affects all systems operating with virtual memory (which support applications building CCWs with virtual memory addresses ... where the virtual address space area may appear linear ... but the corresponding virtual pages are actually non-contiguous in real memory).

The original implementation of os/vs2 was built using standard MVT with virtual address space tables and a page interrupt handler hacked into the side (for os/vs2 svs ... precursor to os/vs2 mvs ... since shortened to just mvs). It also borrowed CCWTRANS from cp/67 to translate the application CCWs (that had been built with virtual addresses) into "real" CCWs that were built with real addresses for real execution.

This version of CCWTRANS had support for IDALs for running on 370s.

Typically, once MVS had shadowed the application CCWs ... creating the shadow CCWs with IDALs giving the non-contiguous page addresses ... then any VM translation of the MVS translated CCWs was strictly one-for-one replacement ... an exact copy of the MVS set of translated CCWs ... which only differed in the real, real addresses specified.
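
a highly simplified sketch of what that translation step amounts to ... this is not CCWTRANS, it ignores the actual IDAL rules (2K block boundaries, CCW flag bits, TIC handling, page pinning, etc.), and the page map and addresses below are invented. the point is just taking a virtually contiguous data area and building the list of real addresses, one per real page piece, that the channel actually uses:

/* highly simplified sketch of the translation step: take a channel command
   whose data area is virtually contiguous and build an indirect address
   list (one entry per real page piece) so a single I/O can transfer into
   scattered real frames.  this is not CCWTRANS; it glosses over the real
   IDAL rules (2K block boundaries, CCW flag bits, TIC handling, page
   pinning, etc.) and the page map below is invented. */

#include <stdio.h>
#include <stdint.h>

#define PAGESIZE 4096u

/* hypothetical virtual->real page mapping: in a real system this comes
   from the (shadow) page tables and the frames must be fixed/pinned for
   the duration of the I/O */
static uint32_t virt_to_real_page(uint32_t vpage)
{
    static const uint32_t frame[] = { 0x2A, 0x07, 0x13, 0x5C };
    return frame[vpage % 4];
}

/* build the list of real addresses covering [vaddr, vaddr+count) */
static int build_idal(uint32_t vaddr, uint32_t count,
                      uint32_t idaw[], int max_idaws)
{
    int n = 0;
    while (count > 0 && n < max_idaws) {
        uint32_t offset = vaddr % PAGESIZE;
        uint32_t chunk  = PAGESIZE - offset;       /* rest of this page */
        if (chunk > count) chunk = count;
        idaw[n++] = virt_to_real_page(vaddr / PAGESIZE) * PAGESIZE + offset;
        vaddr += chunk;
        count -= chunk;
    }
    return n;                                      /* number of entries built */
}

int main(void)
{
    uint32_t idaw[8];
    /* a 10000-byte transfer starting 512 bytes into a virtual page:
       virtually contiguous, but it lands in 3 non-adjacent real frames */
    int n = build_idal(0x00010200u, 10000u, idaw, 8);
    for (int i = 0; i < n; i++)
        printf("entry %d -> real address 0x%08X\n", i, (unsigned)idaw[i]);
    return 0;
}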

all the virtual machine stuff and cp67 had been developed by the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

there was a joint project between cambridge and endicott to simulate virtual 370s under cp67 running on a real 360/67 (for one thing the 370 virtual memory tables had a somewhat different hardware definition, the control register definitions were somewhat different, there were some different instructions, etc).

the base production system at cambridge was referred to as cp67l. the modifications to cp67 to provide 370 virtual machines (as an alternative option to providing 360 virtual machines) were referred to as cp67h. Then further modifications were made to cp67 for the kernel to run on real 370 (using 370 hardware definitions instead of 360 hardware definitions). This cp67 kernel that ran "on" 370 hardware was referred to as cp67i. cp67i was running regularly in a production virtual machine a year before the first engineering 370 model with virtual memory hardware became available (in fact, cp67i was used as a validation test for the machine when it first became operational).

cms multi-level source update management was developed in support of cp67l/cp67h/cp67i set of updates.

also, the cp67h system ... which could run on a real 360/67, providing virtual 370 machines ... was actually typically run in a 360/67 virtual machine ... under cp67l on the cambridge 360/67. This was in large part because of security concerns, since the cambridge system provided some amount of generalized time-sharing to various univ. people & students in the cambridge area (mit, harvard, bu, etc). If cp67h was hosted as the base timesharing service ... there were all sorts of people that might trip over the unannounced 370 virtual memory operation.

about the time some 370 (145) processors (with virtual memory) became available internally (still long before announcement of virtual memory for 370), a couple of engineers came out from san jose and added the device support for 3330s and 2305s to the cp67i system (including multi-exposure support, set sector in support of rps). also idal support was crafted into CCWTRANS. part of the issue was that the channels on real 360/67s were a lot faster and had a lot lower latency ... so there were much fewer instances where breaking a single CCW into multiple data-chained CCWs resulted in overruns. However, 145 channel processing was much slower and required (prefetch'able) IDALs to avoid a lot of the overrun situations.

a few past posts mentioning cp67l, cp67h, and cp67i activity:
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005j.html#50 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches

misc. past posts mentioning cp/67 CCWTRAN
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#0 IBM 360 memory
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004m.html#16 computer industry scenairo before the invention of the PC?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#23 360 DIAGNOSE
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005b.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces

64-bit architectures & 32-bit instructions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: 64-bit architectures & 32-bit instructions
Date: Sun, Mar 26 2006 12:09 pm
Newsgroups: comp.arch
Eugene Miya wrote:
It's so wrong, I emailed Hennessy (who used to lurk here in the 80s) to take a look, since that page credits MIPS (his firm) with the first 64-bit architecture (no mention of Cray, CDC, IBM before 1991, etc. etc. and whatever features they used).

So if you use Wikipedia or grade papers on architecture on this topic be forewarned. That page had so much wrong with it, it just wasn't worth starting to fix (people with more 64-bit experience than me should do that). This came up in part as a topic also in comp.sys.unisys which also has wikipedia problems.

Gray is on my side with this one.


last time i looked at the power/pc entries on wikipedia ... they also had incorrect stuff (although checking just now ... it appears to have been redone) ... reference to a comp.arch post last year mentioning that some amount of the power/pc description was jumbled:
https://www.garlic.com/~lynn/2005q.html#40 Intel strikes back with a parallel x86 design

the above post makes mention that the executive we reported to when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

moved over to head up somerset (when it was formed). then sometime in 93, he left to be president of MIPS.

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, Mar 26 2006 6:59 pm
greymaus wrote:
My original message was inspired by the best computer (sorta) film ever, "War Games", and the two geeks in the back office, nice enough guys (They gave the young hero good advice), but no social graces.

i believe the car ferry in the movie was supposed to be off the coast of oregon ... however it was the old steilacoom ferry that ran between the mainland, mcneil island and anderson island (south of tacoma ... just off ft. lewis). the ferry was later converted to a tourist boat that makes the rounds on lake washington ... out of kirkland

misc. past posts mentioning war games
https://www.garlic.com/~lynn/2000d.html#39 Future hacks [was Re: RS/6000 ]
https://www.garlic.com/~lynn/2001m.html#52 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2002b.html#38 "war-dialing" etymology?
https://www.garlic.com/~lynn/2003g.html#56 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2004p.html#40 Computers in movies

Wonder why IBM code quality is suffering?

From: lynn@garlic.com
Subject: Re: Wonder why IBM code quality is suffering?
Date: Mon, 27 Mar 2006 07:26:47 -0800
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
rtsujimoto_consultant@ibm-main.lst wrote:
No, PL/C was Cornell University's student PL/1 compiler. (I remember it, too; Waterloo had it as one of their batch compilers, as did ISU and many other colleges and universities around the world.)

The PL/X genealogy included PL/S and PL/AS, but not PL/C.


posting in pl/s, et al thread in this n.g. from a couple years ago
https://www.garlic.com/~lynn/2004g.html#46 PL/? History

wikipedia entry for pl/c
https://en.wikipedia.org/wiki/PL/C

another pl/i subset was pl.8 developed as part of 801/risc project. cp.r was written in pl.8. misc. posts mentioning 801, pl.8, cp.r, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

and for some drift, a recent post mentioning wikipedia and power/pc
https://www.garlic.com/~lynn/2006f.html#6 64-bit architectures & 32-bit instructions

The Pankian Metaphor

From: lynn@garlic.com
Subject: Re: The Pankian Metaphor
Date: Tues, Mar 28 2006 10:05 am
Newsgroups: alt.lang.asm, alt.folklore.computers
jmfbahciv writes:
I read those. I'm trying to learn about miraculously piece of it. I tried to find the Marshall Plan; it doesn't exist. I had assumed that a plan implies a written project plan. It was a speech and somehow what the guy said got translated into real actions. I want to learn how this translation happens. The work done to do this is at the heart of how people run countries, including the very mysterious thingie called foreign policy.

try a little of this ... article on ebrd, eca, erp, marshall plan
http://www.ciaonet.org/olj/iai/iai_98haj01.html

from above
The Economic Cooperation Act of 1948 authorising the ERP called for a viable, "healthy [West European] economy independent of extraordinary outside assistance". US bilateral agreements with 15 participating governments set four basic tasks:

• a strong production effort
• expansion of foreign trade
• restoration of internal financial stability
• intra-European co-operation.


... snip ...

old topic drift posting regarding some of the period
https://www.garlic.com/~lynn/2004e.html#19 Message To America's Students: The War, The Draft, Your Future

more recent posts in the same drift:
https://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#33 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#27 Mount DASD as read-only

The Pankian Metaphor

From: lynn@garlic.com
Subject: Re: The Pankian Metaphor
Date: Tues, Mar 28 2006 2:46 pm
Newsgroups: alt.lang.asm, alt.folklore.computers
lynn@garlic.com wrote:
try a little of this ... article on ebrd, eca, erp, marshall plan
http://www.ciaonet.org/olj/iai/iai_98haj01.html


ref:
https://www.garlic.com/~lynn/2006f.html#9 The Pankian Metaphor

i did a search engine query on EBRD (current marshall plan) and "Marshall Plan" and one of the results was the above URL ... clicking on the above URL from the search engine web page brings it up ... but entering the above URL or clicking on the URL from some other page ... gets me a request to enter userid/password to access the information.

so here are search engine queries that pick up the above URL (among several)
http://www.google.com/search?num=100&hl=en&as_qdr=all&q=ebrd+%22marshall+plan%22&spell=1
http://search.yahoo.com/search?fr=FP-pull-web-t&p=ebrd+%22marshall+plan%22

Anyone remember Mohawk Data Science ?

From: lynn@garlic.com
Subject: Re: Anyone remember Mohawk Data Science ?
Date: Wed, Mar 29 2006 10:30 am
Newsgroups: alt.folklore.computers
Charles Richmond wrote:
I worked on the Harris 800 and 1200, which were 24-bit successors to the Harris 500. I worked mainly with their FORTRAN-77 compiler, which was done *extremely* well. It did a great job of optomizing the code. I also worked with their C compiler which was also good. All this was circa 1984 to 1988.

after virtual memory was shipping for 370 ... I did some customer marketing calls for vm/cms on 370/145 at various universities, in some cases in competitive situations against harris and xds

370 virtual memory announce and ship:
IBM S/370 VS: announced 72-08, shipped 73-08 (12 months), VIRTUAL STORAGE ARCHITECTURE FOR S/370

from old posting
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc

harris, datacraft, interdata, etc. timeline
http://www.ccur.com/corp_companyhistory.asp?h=1
http://ed-thelen.org/comp-hist/GBell-minicomputer-list.html

and a sds/xds ref.
http://www.andrews.edu/~calkins/profess/SDSigma7.htm

for some drift, earlier at the univ. in the late 60s, we initially used an interdata/3 as the basis for creating a mainframe telecommunication controller clone
https://www.garlic.com/~lynn/submain.html#360pcm

Barbaras (mini-)rant

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Subject: Re: Barbaras (mini-)rant
Date: Wed, 29 Mar 2006 14:02:22 -0800
Dave Cartwright wrote:
I'm with Jim on this. I was a contractor in the mid to late '90's and came across the early TCPIP stack, written in PASCAL and ported from VM. As I recall it performed OK, and had some quite advanced features like VIPA which was the subject of another recent thread. The Cisco man at the scottish bank I worked was quite impressed.

the base vs/pascal implementation on vm used approximately a whole 3090 processor getting 44kbytes/sec thruput.

i added rfc 1044 support to the base ... and in some tuning at cray research was getting sustained 1mbyte/sec between a 4341-clone and a cray ... using only a modest amount of the 4341-clone cpu (around 25 times the data rate for a small fraction of the cpu).

misc. past posts mentioning adding rfc 1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

slight drift ... my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

other posts on high-speed data transport project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we were operating a high-speed backbone ... but were not allowed to actually bid on nsfnet1 (original internet backbone) ... however, we did get a technical audit by NSF that said what we had running was at least five years ahead of all nsfnet1 bid submissions.

i was asked to be the red team for the nsfnet2 bid ... there were something like 20 people from seven labs around the world that were the blue team (although only the blue team proposal was actually allowed to be submitted). minor past refs:
https://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2003c.html#46 difference between itanium and alpha
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism
https://www.garlic.com/~lynn/2005d.html#13 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview
https://www.garlic.com/~lynn/2006e.html#38 The Pankian Metaphor

part of an old reference from 1988 (although it makes no mention of the added rfc 1044 support)
TITLE IBM TCP/IP FOR VM (TM) RELEASE 1 MODIFICATION LEVEL 2 WITH
ADDITIONAL FUNCTION AND NEW NETWORK FILE SYSTEM FEATURE

ABSTRACT IBM announces Transmission Control Protocol/Internet Protocol (TCP/IP) for VM (5798-FAL) Release 1 Modification Level 2. Release 1.2 contains functional enhancements and a new optional Network File System (NFS) (1) feature. VM systems with the NFS feature installed may act as a file server for AIX (TM) 2.2, UNIX (2) and other systems with the NFS 3.2 client function installed. Additional functional enhancements in Release 1.2 include: support for 9370 X.25 Communications Subsystem, X Window System (3) client function, the ability to use an SNA network to link two TCP/IP networks, and a remote execution daemon (server).


Barbaras (mini-)rant

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Barbaras (mini-)rant
Date: Wed, 29 Mar 2006 16:13:48 -0800
there is also the folklore of the contractor hired to do the original tcp/ip implementation in vtam. the initial try had a tcp benchmark w/thruput much higher than lu6.2. it was explained to him that everybody KNEW that a CORRECT tcp/ip implementation would have thruput much lower than lu6.2 ... and they were only willing to accept a CORRECT protocol implementation. the contract was handled by a group that was sometimes called cpd-west, located in palo alto sq (corner of el camino and page mill).

The Pankian Metaphor

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Subject: Re: The Pankian Metaphor
Date: Sat, 01 Apr 2006 16:38:43 -0800
Larry Elmore wrote:
I'm pretty certain that before WWII, all German tanks were made in Germany. The Pzkw-I, their first post-WWI tank, was built by Henschel. The only possible exception to that would be the Czech tanks that were seized in 1939 when the rest of Czechoslovakia was swallowed up. That's a result of conquest, though, not business. The Soviet Union cooperated with Germany in military development, allowing German training in return for technology transfer, but they didn't build anything for the Germans.

supposedly radio communication during the blitzkrieg helped modernize maneuver warfare. Boyd also quotes Guderian's practice of verbal orders only. the scenario is that war never goes perfectly and there is some class of people that, after a battle, want to lay blame for what they may perceive as mistakes. Guderian wanted the officer on the spot to make the best decision possible w/o constantly having to worry about defending themselves later (this was also the theory that the professional soldier at the lowest possible level should be making the decisions). misc. past posts mentioning Boyd's observations about Guderian's verbal orders only.
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/2001.html#29 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001m.html#16 mainframe question
https://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#38 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
https://www.garlic.com/~lynn/2003h.html#51 employee motivation & executive compensation
https://www.garlic.com/~lynn/2003p.html#27 The BASIC Variations
https://www.garlic.com/~lynn/2004k.html#24 Timeless Classics of Software Engineering
https://www.garlic.com/~lynn/2004q.html#86 Organizations with two or more Managers

old post that had a number of WWII tank references
https://www.garlic.com/~lynn/2000c.html#85 V-Man's Patton Quote (LONG) (Pronafity)
but the URLs appear to have gone 404 or otherwise broken in one way or another.

so a quicky search for a few tank URLs
The Best Army Tanks of World War II
http://www.chuckhawks.com/best_tanks_WWII.htm
The main American tank in World War 2 won by numbers:
http://www.2worldwar2.com/sherman.htm
The Tiger1 in action:
http://www.fprado.com/armorsite/tiger1_in_action.htm
The German Tiger Tank was introduced in August 1942 and was at that time the most powerful tank in the world. The success of the Tiger was so profound, that no allied tank dared to engage it in open combat. This psychological fear soon became to be known as "Tigerphobia".
http://www.worldwar2aces.com/tiger-tank/
The rule of thumb was that it took at least five American M4 Sherman medium tanks to knock out a cornered Tiger.
http://www.fprado.com/armorsite/tiger1-02.htm
At times it took 10 Sherman to kill one Panther or Tiger tank.
http://experts.about.com/q/Military-History-669/World-War-2-Tanks.htm
Even the Panzer IV, the weakest of its opponents, had a more powerful gun. Against the Panther and the Tiger, the Sherman was hopelessly outclassed.
http://www.geocities.com/Pentagon/Quarters/1695/Text/sherman.html


Boyd contrasts the blitzkrieg with the US entry into WWII ... having to throw in massive numbers of personnel with little or no experience. The approach was to create an extremely heavy, top-down decision making operation that allowed for little or no local autonomy. He even mentions that this is a possible reason for the extremely top heavy bureaucracies common in American corporations during the 70s and 80s. The junior officers that got their training on how to run large operations during WWII were coming into their own as senior corporate executives ... and the way you ran a large corporation was to assume that there was little or no expertise in the company and everything had to be rigidly controlled from the top.

Boyd also pointed out that the Sherman was a conscious decision during WWII. The Tiger was recognized as having something like a 5:1 to 10:1 kill ratio over the Sherman ... but it was possible to turn out massive numbers of Shermans and still win via superior numbers (logistics and attrition ... modulo possibly something of a morale issue among Sherman crews being used as cannon fodder). misc. past posts mentioning Boyd's observations about the Sherman vis-a-vis the Tiger.
https://www.garlic.com/~lynn/2001.html#30 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001m.html#11 mainframe question
https://www.garlic.com/~lynn/2002d.html#1 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2003n.html#27 Controversial paper - Good response article on ZDNet
https://www.garlic.com/~lynn/2004b.html#24 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2005d.html#45 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005j.html#11 The 8008
https://www.garlic.com/~lynn/2005j.html#14 The 8008

general collection of past posts mentioning Boyd:
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. other URLs from around the web mentioning Boyd:
https://www.garlic.com/~lynn/subboyd.html#boyd2

trusted certificates and trusted repositories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: trusted certificates and trusted repositories
Newsgroups: sci.crypt
Date: Tue, 04 Apr 2006 08:55:09 -0600
public key related information has certification processes defined, which can either be done by individuals directly or possibly by some trusted third party ("TTP") or other institution responsible for the actual information (in the "TTP" scenario, the institution performing the certification process has frequently not been the institution that was actually responsible for the accuracy of the information; the "TTP" would just certify that they had checked with the agency that was actually responsible for the information).

the result of the certification process has been the loading of the certified information into a trusted repository. the trusted repository could be purely private ... somewhat like an individual's pgp repository; or possibly some public, online trusted repository. Another example is the trusted repository of certification authority public keys shipped in some number of products ... especially browsers associated with the SSL process.

there was also the scenario where, for an offline, disconnected world, there was a requirement for certified information (an analog to the old time letters of credit/introduction from the sailing ship days ... and even earlier). these were typically read-only copies of some public key related certified information (that typically resides in some trusted repository), which were "armored" (with digital signature technology) for survival in the wild (freely floating around). These are called digital certificates.

The requirement for the digital certificate (analog of the old-time, offline letters of credit/introduction), was that the relying party had no means of their own for directly accessing certified information ... so the other communicating party was presenting a form of credential along with the communication (again as opposed to the relying party having their own direct access to such certified information).

The offline era tends to focus on the resistance of the credential/certificate to forgery or counterfeiting (degree of confidence that relying parties could trust the credential/certificate). A similar example is the educational certificates from diploma mills.

The online era tends to focus on the integrity of the certification process and the organization providing the information, typical of online, real-time operation. This moves past the offline era (worried about whether the credentials/certificates could be forged or counterfeited) and moves to the meaning of the actual certified information and all aspects of the associated certification process.

there have been a number of IETF RFCs that revolve around definitions for repositories for digital certificates. In the trusted repository scenario, having both a trusted repository of the information and a read-only copy of the information armored for survival in the wild (freely floating around) would be redundant and superfluous.

In my (actual) RFC summary entries ... (follow the indicated URL), clicking on the ".txt=nnnn" field retrieves the actual RFC

https://www.garlic.com/~lynn/rfcidx14.htm#4398

Storing Certificates in the Domain Name System (DNS), Josefsson S., 2006/03/31 (17pp) (.txt=35652) (Obsoletes 2538) (Refs 1034, 1035, 2246, 2247, 2440, 2693, 2822, 3280, 3281, 3548, 3851, 3986, 4025, 4033, 4034, 4301) (SC-DNS) (was draft-ietf-dnsext-rfc2538bis-09.txt)

https://www.garlic.com/~lynn/rfcidx14.htm#4387

Internet X.509 Public Key Infrastructure Operational Protocols: Certificate Store Access via HTTP, Gutmann P., 2006/02/07 (25pp) (.txt=63182) (Refs 2440, 2585, 2616, 2782, 2854, 3156, 3205, 3275, 3280, 3390, 3852, 3875) (was draft-ietf-pkix-certstore-http-09.txt)

https://www.garlic.com/~lynn/rfcidx14.htm#4386

Internet X.509 Public Key Infrastructure Repository Locator Service, Boeyen S., Hallam-Baker P., 2006/02/03 (6pp) (.txt=11330) (Refs 2559, 2560, 2585, 2782) (was draft-ietf-pkix-pkixrep-04.txt)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

trusted repositories and trusted transactions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: trusted repositories and trusted transactions
Newsgroups: sci.crypt
Date: Tue, 04 Apr 2006 09:52:41 -0600
ref:
https://www.garlic.com/~lynn/2006f.html#15 trusted certificates and trusted repositories

part of the x.509 identity certificate stuff from the early 90s was some forces trying to increase the perceived value of the certificate by grossly overloading it with every possible piece of personal information. the counterforce that started to show up by the mid-90s was the realization that an x.509 identity certificate grossly overloaded with personal information represented significant privacy (and even liability) issues.

as a result there was some retrenchment in the mid-90s to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

these were certificates that contained a public key and a record locator for a repository of information that wasn't generally publicly available. the necessary personal information needed by the relying party was in this trusted repository (and not publicly available). however, in almost every transaction oriented scenario, it was trivial to demonstrate that 1) the actual transaction also carried the record locator (like an account number) and 2) since this was a relying-party-only certificate, the relying-party already had a copy of the public key (having certified and issued the original digital certificate). As a result, it was trivial to demonstrate that the existence of any digital certificate was redundant and superfluous.

the other issue with the trusted transaction scenario is that it tends to be extremely focused on some actual business process. rather than a paradigm of a public repository of static information, a trusted transaction can take into account many factors, like real-time aggregated information. for a financial example, a trusted repository of stale, static information might not only have the account number, but the actual account record including past transactions and possibly the account balance at some point in time. rather than a merchant sending off a request for a copy of your digital certificate (certificate repository) or a copy of your account record (trusted repository, with all possible account related personal information, including current account balance), the merchant can send off a digitally signed transaction asking if an operation for a specific amount is approved or not. The yes/no response divulges a minimum amount of personal information.

so the trusted certificate/credential model tends to be a left-over from the offline world where relying parties didn't have their own information regarding the matter at hand and also didn't have access to a trusted party (as a source of the information). the offline credential/certificate paradigm tended to be pre-occupied with forgeries of the certificate/credential.

the trusted repository model tends to move into the more modern online era. however, it has tended to have more generalized collections of information ... not necessarily being able to predict what information relying parties might actually be in need of. however, especially when individuals have been involved, trusted repositories of general information have tended to represent unnecessary disclosure of sensitive and/or personal information.

the trusted transaction model has tended to be a much more valuable operation, in part because dynamic operations associated with real business processes can be taken into consideration. at the same time the transactions are frequently designed to minimize unnecessary disclosure of sensitive and/or personal information.

my joke left-over from the mid-90s was about various suggestions that the financial/payment infrastructure be moved into the modern era by attaching (relying-party-only) digital certificates to every transaction. my reply was that the offline paradigm digital certificates would actually be setting the financial/payment infrastructure back to pre-70s days, before online, realtime transactions.

it was possible to have digitally signed financial transactions purely for authentication purposes. the responsible financial institution could validate the digital signature with the on-file public key ... and any digital certificate was purely redundant and superfluous. the online transaction could come back approved or declined ... which is much more valuable to merchants ... than a credential certifying that at some point in the past you had opened a bank account.
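
a minimal sketch of that flow ... nothing below is the x9.59 message format or any real payment interface; the field names are invented and the signature check is only a placeholder. the point is that the account number carried in the transaction is enough for the relying party to locate the registered on-file public key and answer approved/declined ... no certificate travels with the transaction:

/* toy sketch of an authenticated transaction against an on-file public key.
   formats, field names and the verify stub are invented -- the only point
   being illustrated is the flow: the account number in the transaction is
   enough to locate the registered public key, so no certificate needs to
   accompany the transaction, and only approved/declined comes back. */

#include <stdio.h>
#include <string.h>

struct account {                  /* the issuing institution's own records  */
    const char *number;
    const char *public_key;       /* registered when the account was opened */
    long        balance_cents;
};

struct transaction {              /* what the merchant forwards              */
    const char *account_number;
    long        amount_cents;
    const char *signature;        /* produced by the account holder          */
};

/* placeholder only -- a real implementation would verify an actual digital
   signature (RSA/ECDSA/...) over the transaction body */
static int signature_ok(const struct transaction *t, const char *public_key)
{
    (void)public_key;
    return t->signature != NULL;
}

static const char *authorize(const struct account *accts, int n,
                             const struct transaction *t)
{
    for (int i = 0; i < n; i++) {
        if (strcmp(accts[i].number, t->account_number) != 0)
            continue;                             /* not this account          */
        if (!signature_ok(t, accts[i].public_key))
            return "declined (bad signature)";
        if (t->amount_cents > accts[i].balance_cents)
            return "declined (insufficient funds)";
        return "approved";                        /* yes/no is all that leaves */
    }
    return "declined (unknown account)";
}

int main(void)
{
    struct account onfile[] = { { "4212-0001", "pubkey-A", 150000 } };
    struct transaction t = { "4212-0001", 4995, "sig-bytes" };
    printf("%s\n", authorize(onfile, 1, &t));
    return 0;
}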

the x9.59 financial standard scenario
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

even required that account numbers used in x9.59 transactions could not be used in non-authenticated transactions. this had the characteristic that skimming/harvesting/data-breach threats against account numbers for fraudulent transactions were eliminated. all the data-breaches in the world against files containing account numbers wouldn't enable a crook to take just the account number and use it in a fraudulent transaction.

recent posts wandering into the skimming/harvesting/data-breach topic:
https://www.garlic.com/~lynn/aadsm22.htm#21 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#23 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#26 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#29 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#30 Creativity and security
https://www.garlic.com/~lynn/aadsm22.htm#31 Creativity and security
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm22.htm#34 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/2006e.html#2 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#3 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#4 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006e.html#10 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#21 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#24 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#26 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#30 Debit Cards HACKED now
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

trusted certificates and trusted repositories

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: trusted certificates and trusted repositories
Newsgroups: sci.crypt
Date: Tue, 04 Apr 2006 11:50:59 -0600
Anne & Lynn Wheeler writes:
The offline era tends to focus on the resistance of the credential/certificates to forgery or counterfeiting (degree of confidence that relying parties could trust the credential/certificate). A similar example is the educational certificates from diploma mills.

one might even be tempted to claim that pre-occupation with the value of SSL certificates has obfuscated the certification operations that the certificates are supposed to represent (aka is the certification the thing of value, or does the certificate, which is supposedly just one way of representing that certification, have its own value unrelated to the certified information). this may have contributed to situations where consumers are perfectly comfortable with websites that have valid SSL certificates ... even tho the websites may have been created for purely fraudulent objectives.

misc. past posts on ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

ref:
https://www.garlic.com/~lynn/2006f.html#15 trusted certificates and trusted repositories
https://www.garlic.com/~lynn/2006f.html#16 trusted repositories and trusted transactions

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

how much swap size did you take?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how much swap size did you take?
Newsgroups: alt.os.linux.redhat
Date: Wed, 05 Apr 2006 09:40:22 -0600
virtual memory is used to contain executing processes. using paged/swapped virtual memory it is possible to have more executing processes than can fit into real storage. the virtual memory hardware is used to indicate what pieces are actually in real memory (and where they are located).

SWAP/paging space is used to contain the virtual memory that isn't being currently used and has been moved out of real storage to make room available for other pieces of virtual memory (either for the same process or other processes).

the amount of SWAP/page space needed on disk is dependent on how much total virtual memory is required by your active running processes.

this is slightly affected by whether the page replacement algorithm is following a "DUP" or "NO-DUP" strategy (duplicate or no-duplicate strategy).

as an undergraduate in the late 60s I did a lot of paging algorithm stuff. I created lazy allocate ... i.e. on initial use by a process, a virtual page was allocated in real memory ... but there was no requirement to allocate a corresponding page on disk (at that time). It was only when a virtual page was selected for replacement the first time ... that a page on disk was initially allocated.
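
a minimal sketch of the lazy-allocate idea (python used purely for illustration ... not the original cp67 code, and the structures/numbers are invented):

# hypothetical sketch: a virtual page only gets a disk slot the first
# time it is selected for replacement, never on initial use

class Page:
    def __init__(self):
        self.in_memory = False
        self.disk_slot = None          # no swap/page space allocated yet

free_disk_slots = list(range(1000))    # available swap/page slots on disk

def first_touch(page):
    """initial use: allocate a real-memory frame, but no disk slot"""
    page.in_memory = True

def select_for_replacement(page):
    """replacement: only now is a disk slot allocated (lazy allocate)"""
    if page.disk_slot is None:
        page.disk_slot = free_disk_slots.pop()
    # ... write the page contents out to page.disk_slot ...
    page.in_memory = False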

if the total requirement by all processes for virtual memory was containable by available real storage ... then pages would never be selected for replacement ... and there was no requirement for swap/page space on disk.

however, at that time, i used a "duplicate" strategy. this meant that when a virtual page was on disk ... and some process required it ... a page was allocated in real memory (potentially first removing some other virtual page to disk), and the requested virtual page was read in and started being used. however, the copy on disk was left allocated. In this strategy, there were two allocated copies of the page ... one in real storage and one on disk. In a scenario where the demand for real storage was much larger than available ... eventually all virtual pages will have been replaced at one time or another. In this scenario, eventually all virtual pages will have an allocated copy on disk (even those also having a copy in real memory ... therefore the "duplicate" reference). This "duplicate" allocation scenario requires that the amount of secondary SWAP/page allocation on disk is equal to the maximum amount of virtual memory that may be concurrently required by all processes that you might have running at the same time (which is dependent on the workload that you happen to be running on the machine).

later in the 70s, as real memory started becoming much larger ... i started seeing some configurations where the amount of real storage (and therefore the number of potential "duplicates") was approaching a significant percentage of possible available SWAP/page disk space. This was especially becoming true of those configurations with fixed-head disks and/or electronic disks being used for SWAP/page.

for these configurations i added a "no-duplicate" implementation and some code that could dynamically switch between "duplicate" strategy and "no-duplicate" strategy. In the "no-duplicate" strategy, whenever a virtual page was brought in from disk, the corresponding disk space was de-allocated (therefore there wasn't a duplicate page both in real memory and on disk).

For a configuration with heavy SWAP/page use and using a "duplicate" strategy ... the amount of SWAP/page space tends to be equal to the maximum virtual memory required by all concurrently running processes. In the "no-duplicate" strategy, the amount of SWAP/page space plus real memory needs to be equal to the maximum virtual memory required by all concurrently running processes (i.e. in a no-duplicate strategy, the amount of SWAP/page space can be reduced by the amount of real storage). The real-storage reduction in the "no-duplicate" calculation is modulo the amount of real storage required for fixed kernel requirements and file caching (which may be around 1/3rd of real storage for some configurations).
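
a back-of-the-envelope sketch of the sizing difference (numbers and units invented purely for illustration):

# hypothetical swap-sizing arithmetic for the dup vs no-dup discussion above

def swap_needed_mb(total_virtual_mb, real_mb, kernel_and_cache_mb, strategy):
    """approximate SWAP/page space needed on disk, in MB"""
    if strategy == "dup":
        # duplicate strategy: eventually every virtual page may hold a
        # disk copy, even while it also has a copy in real memory
        return total_virtual_mb
    if strategy == "no-dup":
        # no-duplicate strategy: the disk copy is freed on page-in, so
        # pages resident in (pageable) real memory need no disk slot
        pageable_real_mb = real_mb - kernel_and_cache_mb
        return max(total_virtual_mb - pageable_real_mb, 0)
    raise ValueError("strategy must be 'dup' or 'no-dup'")

# e.g. 6144MB of concurrent virtual memory, 4096MB real, roughly 1/3 of
# real memory taken by kernel and file caching:
print(swap_needed_mb(6144, 4096, 1365, "dup"))     # 6144
print(swap_needed_mb(6144, 4096, 1365, "no-dup"))  # 3413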

misc. past postings mentioning dup & no-dup strategies
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Over my head in a JES exit

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 10 Apr 2006 09:22:17 -0600
Efinnell15@ibm-main.lst wrote:
The story I heard was they liked ASP, but it was too piggy so a furious rewrite was undertaken and it became Half ASP. Most of the design objectives were met. When they went to present, it was deemed unsophisticated and changed to Houston ASP.

Local Houston branch office group put out HASP as a type-III program for some time before there was an official support group formed in gburg and many of the people moved there ... and the name changed to JES2

ASP was a two-processor loosely-coupled system ... although there was a flavor called LASP (single processor) ... local-ASP, somewhat to compete with HASP.

both HASP and ASP came out of the field since the original "spooling" built into the base product had significant issues.

My wife served a stint in the gburg group ... and was one of the technology catchers for ASP ... when it was also moved to the gburg group and renamed JES3. She also did a detailed technology and market analysis of JES2 and JES3 for a proposal for a merged product (maintaining the major important customer requirements from each in the merged product). However, there was too much discord for that to ever succeed. She did get quite a bit of experience regarding loosely-coupled operation and was eventually con'ed into serving a stint in POK in charge of loosely-coupled architecture. She developed Peer-Coupled Shared Data architecture ... which also saw quite a bit of resistance. Except for work by the IMS hot-standby effort, it didn't see much uptake until parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata

lots of past postings mentioning HASP
https://www.garlic.com/~lynn/submain.html#hasp

I can almost see the cover of my old HASP documentation ... but I can't remember the HASP type III program product number. However, search engines are your friend; an old JES2 posting from 1992 giving some of the original history and the type-III program product number:
http://listserv.biu.ac.il/cgi-bin/wa?A2=ind9204&L=jes2-l&T=0&P=476

following take from above:
"It was originally written in Houston in 1967 in support of the Appolo manned spacecraft program. Subsequently it was distributed as a Type III program, picked up a small number of users but through the years the number of users have increased fairly substantially. Further more there is a continuing demand today for HASP. The growth that we have accomplished has not been without problems. In 1968 IBM decided that in release 15/16 since we had readers and writers we no longer had a requirement for HASP. IBM came down very hard on the users and said we were'nt going to have a HASP. The users resisted and HASP exists today."

Any typos in the above are mine. My Contributed Program Library document for HOUSTON AUTOMATIC SPOOLING PRIORITY SYSTEM 360D 05.1.007 is dated August 15, 1967. The authors listed are: Tom H. Simpson, Robert P. Crabtree, Wayne F. Borgers, Clinton H. Long, and Watson M. Correr.


... snip ...

One of the internal problems with the JES2 NJE network was its traditional networking paradigm and severe limitations. Quite a bit of the original code still carried the "TUCC" label out in cols. 68-71 of the source. It had intermixed networking related fields in with all the job control header fields. It had also scavenged the HASP 256-entry pseudo device table for network node identifiers; as a result a typical installation might have 60-80 pseudo devices, leaving only 180-200 entries for network node definitions. By the time JES2 NJE was announced, the internal network was already well over 256 nodes.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

The lack of strongly defined layers and the intermixing of networking and job information severely compromised a lot of the internal network. Minor changes to header format from release to release would result in network file incompatibilities ... some machines somewhere in the world would upgrade to a newer release before all the other machines in the world had simultaneously upgraded to the same release. There is the well-known legend of the JES2 systems in San Jose being upgraded, which resulted in MVS systems in Hursley crashing trying to process network files originating from the San Jose system.

the original internal network technology had been developed at the science center (same place that gave you virtual machines, gml, and a lot of interactive computing)
https://www.garlic.com/~lynn/subtopic.html#545tech

and had well defined network layering as well as effectively a form of gateway technology in each of its nodes. I've frequently asserted that this heavily contributed to the internal network being larger than the arpanet/internet from just about the beginning until possibly mid-85. Because of the many limitations in NJE ... JES2 nodes tended to be restricted to end-nodes in the internal network. all the intermediary nodes were the responsibility of the networking technology from the science center. When talking to a JES2 system ... a special NJE driver would be started. The ease of having lots of different drivers all simultaneously running in the same node gave rise to a large library of different NJE drivers ... corresponding to all the currently deployed JES2 releases. The other technology eventually developed in these drivers was a canonical JES2 header representation. The specific NJE driver directly talking to a real end-node JES2 system would do an on-the-fly translation from the canonical format to the format specifically required by the JES2 system it was talking to (as a countermeasure to JES2 mis-understanding NJE header fields resulting in MVS system crashes).
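
a rough sketch of the canonical-header idea (field names and release labels invented ... not the actual NJE header formats):

# hypothetical illustration: keep one canonical header internally and have
# the driver talking to a given JES2 end-node emit only the layout that
# release understands (instead of letting it crash on unknown fields)

CANONICAL = {"origin_node": "SJRLVM1", "dest_node": "HURSLEY",
             "spool_class": "A", "job_name": "PAYROLL"}

# per-release field layouts the drivers know how to emit (made up)
RELEASE_LAYOUTS = {
    "jes2_old": ["origin_node", "dest_node", "job_name"],
    "jes2_new": ["origin_node", "dest_node", "spool_class", "job_name"],
}

def to_release_format(canonical_header, release):
    """drop/reorder fields to match the target release's header layout"""
    return {field: canonical_header.get(field, "")
            for field in RELEASE_LAYOUTS[release]}

print(to_release_format(CANONICAL, "jes2_old"))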

... for slightly other drift

one of the principal HASP people went to Harrison in westchester county and started a project codenamed RASP. This was sort of a cross between tss/360 and MTS ... reborn and redone from scratch as a brand new operating system, tailored for a virtual memory environment and not carrying with it all the baggage of the real memory heritage ... heavy dependency on pointer passing, application API still real-address i/o oriented with heavy dependency on ccw translation. a couple recent posts on ccw translation
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT

for various reasons that person eventually left and became an Amdahl "fellow" in Dallas and started a project similar to RASP that was code named Aspen. Along the way there was some legal wrangling regarding whether Aspen was contaminated with RASP code. misc. past posts mentioning RASP and/or Aspen.
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#2 Article in Information week: Mainframe Programmers Wanted
https://www.garlic.com/~lynn/2005q.html#26 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006b.html#24 Seeking Info on XDS Sigma 7 APL

Old PCs--environmental hazard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old PCs--environmental hazard
Newsgroups: alt.folklore.computers
Date: Mon, 10 Apr 2006 10:14:06 -0600
Brian Boutel writes:
Some organisations require clean desks every night for all employees, sharing or not. This is for security. You can't trust the cleaners.

you may not be able to trust the cleaners ... but i thot clean desks were for the security officers that came around late at night checking to see if you had complied with all classified information being under lock. since they had to leaf thru all the paper that had been left out ... their job was much faster if there was nothing left out.

for classified trash there were 50gal barrels; the top lid was secured with a padlock and had a specially constructed slot. I don't think they were directly emptied, instead they were periodically collected, with an empty barrel replacing the collected barrel.

we had "internal use only" ... which was ok to leave out, the printed corporate telephone book was "internal use only". then was simple corporate confidential, which had to be kept under locked up at night. I once accidentally carried a document labeled corporate confidential into a gov. facility. the marine guards caught it on the search leaving and maintained that they had no way of knowing whether it was a gov. confidential document or a corporate confidential document (the corporate name possibly being just a gov. confidential classification term). It took quite a while to get the document liberated.

then there was "confidential - restricted" ... basically need to know only. in the 70s, internal copies of the cern report to share comparing cms and tso got stamped "confidential - restricted" ... there was an objective of restricting employee knowledge about how bad tso was compared to cms.

the highest level were the candy striped "registered confidential" documents ... each page was embossed with a large faint document serial number in the background (numbers were a couple inches in size), aka there was a document publication number ... but in addition each copy of the document had a unique document serial number embossed on each page. these had to be kept in special double-locked cabinets. you had to sign for each serial numbered document and there were sporadic security audits where you had to show that each document signed out to you was still in your possession (and kept under lock and key). at one point I had a cabinet full of 811 documents, which was the code name (taken from its nov. 1978 inception) for 370-xa, 31bit virtual addressing, some of the stuff that showed up in 3081 processors.

misc. past posts mentioning "confidential - restricted" and/or "registered confidential"
https://www.garlic.com/~lynn/98.html#28 Drive letters
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#20 mainframe question
https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#14 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002n.html#37 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003k.html#13 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#21 TSO alternative
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004n.html#17 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005f.html#42 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#49 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005s.html#26 IEH/IEB/... names?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Over my head in a JES exit

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
CC: ibmmain <ibm-main@bama.ua.edu>
Date: Mon, 10 Apr 2006 11:43:30 -0600
Anne & Lynn Wheeler wrote:
One of the internal problems with the JES2 NJE network was it traditional networking paradigm and severe limitations. Quite a bit of the original code still carried the "TUCC" label out in cols. 68-71 of the source. It had intermixed networking related fields in with all the

ref:
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit

another problem JES2 had internally was its use of source maintenance.

both cp67 (and later vm370) and hasp shipped with full source. A lot of customers commonly rebuilt their systems from scratch ... re-assembling everything from source.

internally, the jes2 group had adopted the source maintenance infrastructure from cp/cms ... and were doing almost all their development on the cp/cms platform. actually a large number of product groups did the majority of their development on the cp/cms platform ... even a large number of the mvs-based products. however, jes2 was one of the few mvs-based products that exclusively used the cp/cms source maintenance infrastructure (possibly in part because they shared a common heritage of shipping all source and customers being used to rebuilding systems from scratch using the source).

however, this created some amount of hassle for the jes2 development group ... because they then had to convert all their cp/cms based stuff to the mvs infrastructure for product ship.

misc. past posts describing evolution of cp/cms source maintenance infrastructure
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day
https://www.garlic.com/~lynn/2003j.html#45 Hand cranking telephones
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005r.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#6 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006b.html#10 IBM 3090/VM Humor

A very basic question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A very basic question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 10 Apr 2006 12:32:57 -0600
Ted MacNEIL wrote:
Also, for the poster that asked about CPU usage. Who cares? This entire CICS sub-system is using less than 5% of the processor. The only one being impacted is this sub-system. CICS cannot sustain that rate very long without response implications.

We need to know the cost per I/O to size the work effort to repair or remove.


the science center had done a whole lot of work on performance ...
https://www.garlic.com/~lynn/subtopic.html#545tech

lots of work on instrumenting much of the kernel activity and collecting the statistics at regular intervals (every ten minutes, 7x24 ... eventually with historical data that spanned decades).

people at the science center also did a lot of event modeling for characterizing system operation

and did the apl analytical system model that eventually turned into the performance predictor on HONE.

HONE was an online (cp/cms based) time-sharing system that supported all the field, sales and marketing people in the world (major replicas of HONE were positioned all around the world).
https://www.garlic.com/~lynn/subtopic.html#hone

eventually all system orders had to be processed by some HONE application or another. the HONE performance predictor was a "what-if" type application for sales ... you entered some amount of information about customer workload and configuration (typically in machine readable form) and asked "what-if" questions about changes in workload or configuration (type of stuff that is normally done by spreadsheet applications these days).

i used a variation of the performance predictor for selecting the next configuration/workload in the automated benchmark process that I used for calibrating/validating the resource manager before product ship (one sequence involved 2000 benchmarks that took 3 months elapsed time to run)
https://www.garlic.com/~lynn/submain.html#bench

the performance monitoring, tuning, profiling and management work eventually evolved also into capacity planning.

there also were some number of instruction address sampling tools ... to approximate where processes were spending much of their time. One such tool was done in support of deciding what kernel functions to drop into microcode for endicott's microcode assists. the guy in palo alto that had done the apl microcode assist originally for the 370/145 ... did a psw instruction address sampling application in 145 microcode and we used it to help select what parts of the kernel pathlengths to drop into microcode for the kernel performance assist. minor reference
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

However, another technology we used was to take interval activity data and run it thru multiple regression analysis ... assuming we accounted for the major activity variables, we were able to come up with pretty accurate cpu time correlations with different activities. It doesn't take a whole lot of effort ... just a fair amount of recorded activity over a range of activity, fed into a good MRA routine. At the time, we primarily used the MRA from the scientific subroutine library. There are some free packages you can find on the web ... but to get one that can handle both a large number of variables and a large number of data points ... you may have to consider a commercial package.

In subsequent years, I've used MRA to find anomalous correlations in extremely large applications ... unusual activity correlations that conventional wisdom maintained couldn't possibly be correct ... but actually accounted for major resource consumption. This is analogous to one of the other postings in this thread about finding things happening in a large application that the responsible application people couldn't possibly believe was happening.

trivial things MRA may turn up are cpu use and i/o activity by major functions, cpu use per i/o, etc.
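
a minimal sketch of that regression approach (the interval data is invented; a real run would use the recorded activity data, and this sketch omits an intercept term):

# hypothetical illustration: regress interval cpu seconds against interval
# activity counts; the fitted coefficients approximate cpu cost per unit
# of each activity (e.g. cpu per disk i/o)
import numpy as np

# columns: disk i/os, transactions, pages-in per interval (made-up data)
activity = np.array([
    [1200, 300,  50],
    [2400, 610, 120],
    [1800, 450,  80],
    [3000, 760, 160],
    [ 900, 220,  40],
], dtype=float)

cpu_seconds = np.array([14.1, 28.6, 21.0, 35.9, 10.4])

# least-squares fit: cpu_seconds is approximately activity @ coeffs
coeffs, *_ = np.linalg.lstsq(activity, cpu_seconds, rcond=None)
for name, c in zip(["per disk i/o", "per transaction", "per page-in"], coeffs):
    print(f"approx cpu seconds {name}: {c:.5f}")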

Old PCs--environmental hazard

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old PCs--environmental hazard
Newsgroups: alt.folklore.computers
Date: Mon, 10 Apr 2006 13:37:27 -0600
"Rostyslaw J. Lewyckyj" writes:
As I remember it on the 360 ... OS MVS ..., there were/are two types, levels, of IPL. There is the cold start IPL and a warm start IPL For either system you set in the address of the device from which the system is to be loaded. The hardware then reads in the IPL bootstrap record from the device and starts executing it. In a cold start IPL you lose all previous information about the former state of the system, such as the spool pack, job queues, etc. In a warm start you don't.

there were generally two different kinds of things referred to as IPL "warm start". one tried to scavenge old data lying around in memory at the time of the reboot and save it as part of restarting. the other was slightly analogous to fsck in unix ... although significantly faster and involving significantly less information.

when recoverable information needed by either process was too messed up, you fell back to a cold start.

search engine is your friend ... here is a more detailed description of ipl, nip, and hasp with os/360 (predating hasp being renamed jes2, which managed the spool and the job queue; and os/360 being renamed mvs).
http://hansen-family.com/mvs/IPL_Process.htm

cold/warm was when you had gotten to the hasp initialization phase of system startup and had to reply warm/cold.

a little more overview:
http://hansen-family.com/mvs/MVS.htm

various past postings mentioning hasp
https://www.garlic.com/~lynn/submain.html#hasp

it took quite awhile before os/360 got alternate console support. I remember having the whole system to myself on weekends and the system coming to a grinding halt and ringing the bell. nothing would work. even rebooting/reipl just resulted in the bell ringing. i would eventually bang on the 1052-7 operator's console in frustration and the fan-fold paper would fall out the back of the 1052-7 ... and it was obvious the system was waiting for new paper to be fed into the console.

the 1052-7 had a little finger sensor for when it ran out of paper ... and it would present a "unit check", "intervention required" i/o error in response to attempts to write to the console. the system then would ring the bell and wait until the operator had fed a new stack of paper into the operator's console.

it turns out that it was possible for the end of the paper feed to move past the finger sensor ... but it was sometimes positioned just so that the paper didn't actually finish falling out. from the front of the operator's console it still looked like it had a good supply of paper. it wasn't until you banged in frustration on the keyboard that it jiggled the mechanism and the paper fell out (and then it was obvious what the problem was).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Over my head in a JES exit

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 10 Apr 2006 14:04:05 -0600
Brian Inglis writes:
... more importantly, can you see the cover of your HASP songbook? ... on Friday morning?

I've got an old HASP songbook someplace (and even several reproductions of the 1931 edition of the ibm song book).

scids typically closed at midnight ... so that wasn't hard. however, back in the days of the free bar at scids ... share would have pre-purchased/ordered some number of cases of alcoholic beverages. thursday night after scids, the un-consumed beverages would be taken to the (share) president's suite. some number of people had an obligation to help with minimizing the number of non-empty alcoholic beverage containers that the president had to deal with on friday.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Over my head in a JES exit

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 10 Apr 2006 16:56:49 -0600
Brian Inglis writes:
Nobody created a PDS (module library) creator (or transporter) similar to (Unix) tar, so they could just NJE the files over to MVS? IIRC tar format is basically a dump of inode metadata and file data.

it wasn't a filesystem metadata issue (files easily moved back and forth) ... it was that the structure of the multi-level cms source maintenance metadata was different than the mvs product/component management metadata. the mvs fixes, enhancement, and change maintenance metadata was totally different (and there wasn't necessarily a straight-forward automated conversion from one to the other).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old PCs--environmental hazard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old PCs--environmental hazard
Newsgroups: alt.folklore.computers
Date: Mon, 10 Apr 2006 17:20:15 -0600
"Rostyslaw J. Lewyckyj" writes:
It was also the place where the version of Unix that was later taken over by Amdahl was first developed.

there was a concerted effort to try and get approval to make a job offer to the person (that did most of the work) after he had graduated ... but it wasn't successful, and so he went w/Amdahl.

as mentioned in some of the past postings referencing aspen ... there was some amount of rivalry between aspen and gold (aka A.U.) projects. I suggested to some of them that they might somewhat reconcile by looking at doing something akin to the tss/370 ssup work for AT&T ... with a stripped down page mapped supervisor underneath higher level unix code.

If nothing else, there were several in the valley that would show up to the monthly meetings at SLAC and then adjourn to the blue goose or one of the other watering holes afterwards ... where we would kibitz/gossip about such things.

couple recent posts mentioning the tss/370 unix stuff for AT&T:
https://www.garlic.com/~lynn/2006e.html#31 MCTS
https://www.garlic.com/~lynn/2006e.html#33 MCTS

the most recent post with references to Aspen (it didn't mention the gold activity ... but it has a list of URLs, some of which mention both aspen and gold):
https://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit

misc. past posts with reference to the TUCC label in HASP networking source:
https://www.garlic.com/~lynn/2001g.html#48 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#23 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#48 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004g.html#8 network history
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005f.html#12 Mozilla v Firefox
https://www.garlic.com/~lynn/2005f.html#53 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005i.html#37 Secure FTP on the Mainframe
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#0 HASP/ASP JES/JES2/JES3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old PCs--environmental hazard

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old PCs--environmental hazard
Newsgroups: alt.folklore.computers
Date: Mon, 10 Apr 2006 17:47:49 -0600
"Rostyslaw J. Lewyckyj" writes:
For some reason I thought/remembered that the person involved was 'S.B' and he went to Bell Telephone Labs, got interested in security and is still there and also involved in SANS. Also the system name was UTS?

I remember an effort to try and hire at least one person that was involved in the effort. at Amdahl, the development project was code named gold (after the initials for "Amdahl unix", aka A.U.). the product was announced as UTS.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Old PCs--environmental hazard

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Old PCs--environmental hazard
Newsgroups: alt.folklore.computers
Date: Mon, 10 Apr 2006 18:02:30 -0600
an old email only partially related (note gold wasn't single user as speculated in the attached)

Date: 04/04/80 10:48:34
To: wheeler
From: somewhere on the east coast

Lynn,

XXX and I were involved to some extent with the TSS thing, and to wishful thinking about modifying CP in the way that they wanted to modify TSS. We ahve heard of the Amdahl UNIX stuff. The former Princeton student did a conversion of UNIX while at Princeton to run under VM, but it really ran as if it were in a bare machine. The TSS thing was very different, and so was the proposed CP changes...to provide all sort of functions for the lower levels of UNIX supervisor to do task forking, etc. The Princeton thing never got to a practical working state, but was obviously feasible.

It could be that the version done at Amdahl by the former Princeton student also runs in a virtual machine, and supports multiple users in a virtual machine. Perhaps the Amdahl hypervisor is all that it runs under? Maybe that UNIX takes advantage of the various diagnoses of CP? Another possiblity is that further work was done to make the UNIX single user in nature. This is not too hard to do, since the Bell Sys. Tel. J. had an article about a single user LSX11 version of UNIX.


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 11:03:19 -0600
Ken Johanson writes:
Moreover, x509 certs (and constructed CAs) instantly lend themselves to a whole host of other secure AND trust applications, https, ldaps, smtps, ftps, imaps (to name a few), mail signing / encryption, software signing. When you extend their reach out to all these other forms of communication, and used by computer laymen, old-fashioned random public key strings is simply not at all feasibile.

Anyone who claims that x509 is disadvantaged compared to plain PKI, is simply demonstrating that they have no practical or level experience with BOTH technologies. That really shows like sore thumb to those of us who do have both exeriences.


one of the issues with x.509 identity certificates from the early 90s was the eventual realization that certificates potentially grossly overloaded with personal information represented significant privacy and liability issues.

as a result, in the mid-90s you saw the introduction of relying-party-only certificates ...
https://www.garlic.com/~lynn/subpubkey.html#rpo

where the only unique information in the certificate was some sort of account number (that could be used to index a repository where the actual entity information resided, including the originally issued public key information) and a public key.

an entity created some sort of message or transaction, digitally signed the transaction and appended the RPO-certificate. It was trivial to show for these scenarios that the appended certificate was redundant and superfluous.

I got a lot of flack in the mid-90s for demonstrating that appending such certificates was redundant and superfluous. The conventional wisdom at the time was that appending certificates was required for everything, and if you voiced any sort of opposing view, you were a heretic doing the work of the devil. One of the lines was that appending certificates to retail payment transactions would bring the business process into the modern era. My counter-argument was that the purpose of appending certificates to payment transactions was to provide sufficient information for offline authorization ... and, as such, was actually a return to the 50s era, predating online transactions.

This is the business process analysis of appended certificates following the offline credential model or the letters of introduction/credit from the sailing ship days (dating back at least several centuries). The credential/certificate paradigm is useful in offline scenarios where the relying party:

1) doesn't have their own information about the other party

2) doesn't have any online access to trusted authorities for information about the other party

3) must rely on prepackaged, stale, static credential/certificate about the other party rather than having online access to realtime information

when we were called in to work with a small client/server startup that wanted to do payments on their server and had this technology called SSL ...
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we had to work out the business process details of using SSL technology for payments ... as well as doing walkthru audits of these emerging operations calling themselves certification authorities.

for the original payment gateway, for using SSL between the server and the payment gateway (for payment transactions), one of the first things we had to do was specify something called mutual authentication (which hadn't yet been created for SSL). However, as we fleshed out the business process of servers interacting with the payment gateway, we were keeping tables of authorized merchants at the gateway and gateway-specific information loaded at each merchant. By the time we were done with the process, it was evident that any certificate use was also totally redundant and superfluous (purely a side-effect of using the crypto functions that were part of the SSL library).

The payment gateway had registered information (including public keys) of all authorized merchants and all authorized merchants had configuration information regarding the payment gateway (including its public key). There was absolutely no incremental business process advantage of appending digital certificates to any of the transmissions.
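
a minimal sketch of that registered-key model (names invented; the python pyca/cryptography package used here as just one possible stand-in for the crypto) ... the relying party verifies the signature against the public key already on file, so no certificate needs to travel with the transaction:

# hypothetical registered-public-key verification, no certificate attached
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

registered_keys = {}   # merchant_id -> public key, populated at registration

def register_merchant(merchant_id, public_key):
    registered_keys[merchant_id] = public_key

def verify_transaction(merchant_id, transaction_bytes, signature):
    key = registered_keys.get(merchant_id)
    if key is None:
        return False                      # unknown merchant ... decline
    try:
        key.verify(signature, transaction_bytes)
        return True
    except InvalidSignature:
        return False

# example flow (merchant id and transaction contents are made up)
priv = ed25519.Ed25519PrivateKey.generate()
register_merchant("merchant-42", priv.public_key())
txn = b"amount=19.95;account=...;order=1001"
print(verify_transaction("merchant-42", txn, priv.sign(txn)))   # True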

The other issue with stale, static, redundant and superfluous certificates, at least related to payment transactions, was that a typical retail payment transaction is on the order of 60-80 bytes. The RPO-certificate overhead from the mid-90s was on the order of 4k-12k bytes (i.e. appending stale, static, redundant and superfluous digital certificates to payment transactions was causing a two orders of magnitude payload bloat). Somewhat as a result, there was an effort started in the financial standards organization for "compressed" digital certificates ... with a goal of reducing the 4k-12k byte overhead down into the 300 byte range. Part of their effort was looking at eliminating all non-unique information from an RPO-certificate issued by a financial institution (CPS, misc. other stuff). I raised the issue that a perfectly valid compression technique was to eliminate all information already in possession of the relying party. Since I could show that the relying party already had all possible information in the digital certificate, it was possible to reduce the digital certificates to zero bytes. Rather than claiming it was redundant and superfluous to attach stale, static digital certificates to a transaction, it was possible to show that zero-byte digital certificates could be attached to every transaction (for a significant reduction in the enormous payload bloat caused by digital certificates).

Another model for certificates ... with respect to the emulation of credentials/certificates in the offline world (where the relying party had no other mechanism for establishing information in first time communication with a total stranger), was the offline email model from the early 80s. The relying party would dial up their local (electronic) post office, exchange email, and hang up. They potentially then had to deal with first time email from a total stranger and no way of establishing any information about the sender.

During the early establishment of electronic commerce, I had frequent opportunity in exchanges to refer to the merchant digital certificates as comfort certificates.

Stale, static digital certificates are a tangible representation of a certification business process. In the offline world, certificates and credentials were used as a representation of such certification business processes for the relying parties who had no direct access or knowledge regarding the original certification process. These stale, static certificates and credentials provided a sense of comfort to the relying parties that some certification process had actually occurred.

One of the things that appears to have happened with regard to physical certificates and credentials ... is that they seem to have taken on some mystical properties totally independent of the certification process that they were created to represent. The existence of a certificate or credential may convey mystical comfort ... even though they simply are a stale, static representation of the certification process ... for relying parties that have no other means of acquiring information about the party they are dealing with.

Another example of such credentials/certificates taking on mystical comfort properties are the diploma mills ... where the pieces of parchment produced by diploma mills seem to take on attributes totally independent of any educational attainment that the parchment is supposed to represent.

An issue is that in the transition to an online paradigm, it is possible for relying parties to have direct online real-time access to the certified information ... making reliance on the stale, static representations using credentials and certificates redundant and superfluous.

However, there is an enormous paradigm change from an offline-based orientation (where a large percentage of people may still draw great comfort from artificial constructs that emulate the offline credential/certificate paradigm) to an online-based paradigm dealing with realtime information and realtime business processes.

One of the big PKI marketing issues has been "trust". However, the actual trust has to be in the certification process ... where any certificate is purely a stale, static representation of that certification process for use by relying parties that have no other means of directly accessing the information.

As the world has migrated to more and more online ... that is somewhat pushing the X.509 and PKI operations into market segments that have no online mechanisms and/or can't justify the costs associated with online operation. However, as online becomes more and more ubiquitous and online costs continue to decline, that market segment for x.509 and PKI operations is rapidly shrinking (needing stale static certificates as representation of some certification process for relying parties that otherwise don't have direct access to the information). Also part of the problem of moving into the no-value market segment, is that it becomes difficult to justify any revenue flow as part of doing no-value operations.

a slightly related comment on some of the PKI operations attempting to misuse constructs with very specific meaning
https://www.financialcryptography.com/mt/archives/000694.html

slightly related set of posts on having worked on word smithing both the cal. state and federal electronic signature legislation
https://www.garlic.com/~lynn/subpubkey.html#signature

a collection of earlier posts in this thread:
https://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#12 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#13 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#16 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#39 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#13 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#14 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#16 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#17 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#18 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#27 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#29 X.509 and ssh

misc. past posts mentioning the enormous benefits of zero byte certificates
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech6 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss1 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm3.htm#kiss6 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm4.htm#6 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
https://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#softpki23 Software for PKI
https://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#64 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
https://www.garlic.com/~lynn/aadsm14.htm#30 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm14.htm#41 certificates & the alternative view
https://www.garlic.com/~lynn/aadsm15.htm#33 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm22.htm#4 GP4.3 - Growth and Fraud - Case #3 - Phishing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A very basic question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A very basic question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 11 Apr 2006 13:42:09 -0600
Anne & Lynn Wheeler wrote:
eventually all system orders had to be processed by some HONE application or another. the performance predictor was a "what-if" type application for sales ... you entered some amount of information about customer workload and configuration (typically in machine readable form) and asked "what-if" questions about changes in workload or configuration (type of stuff that is normally done by spreadsheet applications these days).

i used a variation of the performance predictor for selecting the next configuration/workload in the automated benchmark process that I used for calibrating/validating the resource manager before product ship (one sequence involved 2000 benchmarks that took 3 months elapsed time to run)
https://www.garlic.com/~lynn/submain.html#bench

the performance monitoring, tuning, and management (along with workload profiling) work evolved into capacity planning.


ref:
https://www.garlic.com/~lynn/2006f.html#22 A very basic question

for a little more drift ... when the US hone datacenters were consolidated into a single center in the bayarea in the late 70s ... possibly the largest single-system cluster anywhere (at the time) was created. a variation of the performance predictor was used to monitor real time activity of all the members of the cluster and when a new branch office (or other sales, marketing or field) person initiated a login, the overall view of the cluster was used to decide which machine the user actually logged into (aka providing both availability and load-balancing at log-in)
https://www.garlic.com/~lynn/subtopic.html#hone

In the 70s, the base system didn't have process migration between different processors in a cluster (once logged on).
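
a toy sketch of that login-time placement decision (member names and load metrics invented) ... with no process migration, the only placement choice is at login, so pick the least-loaded member that is currently up:

# hypothetical login load-balancer for a cluster with no process migration

cluster_load = {"HONE1": 0.62, "HONE2": 0.48, "HONE3": 0.91}   # projected load
available    = {"HONE1", "HONE2", "HONE3"}                     # members up right now

def pick_member_for_login(loads, up):
    """availability + load-balancing at login: least-loaded member that is up"""
    candidates = {member: load for member, load in loads.items() if member in up}
    return min(candidates, key=candidates.get)

print(pick_member_for_login(cluster_load, available))   # HONE2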

Note, however, one of the time-sharing service bureaus using the same (cp/cms) platform had done such an enhancement in the mid-70s ...
https://www.garlic.com/~lynn/submain.html#timeshare

this particular service bureau had moved to 7x24 operation with customers world-wide ... and so there was no period where the system was totally idle and could be taken down for service. When a member of their (mainframe) cluster configuration needed to be taken offline for service or preventive maintenance ... it was possible to migrate all active processes off the piece of hardware (to other members in the cluster, transparently to the end user), and take the hardware offline.

they had datacenter near boston and another in san francisco and claimed to be able to do transparent process migration between datacenters (as well as between members of the cluster within a datacenter) ... modulo having replicated data at the two centers.

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 14:12:46 -0600
Ken Johanson writes:
"potentially" seem really vague to me the way you use it. The fact is, very few fields are actually required by CAs -- and the ones that are required are the ones needed to establish the identity being claimed (given name, location, email) (to distinguish one John Doe from another), and also the endorser cert / chain.

ref:
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh

the issue in the early 90s with the x.509 identity certificate standard ... appeared to be that personal information in a certificate seemed to be a valuable business niche for certificates ... and some of the emerging things called certification authorities ... which were out to sell these x.509 identity certificates ... started down the road that the more personal information included, the greater the value of the certificate (and the implication that they could charge more).

that trend somewhat created the realization in a number of institutions that certificates, grossly overloaded with personal information, gave rise to significant privacy and liability issues ... and contributed to the appearance of the relying-party-only certificates in the mid-90s.
https://www.garlic.com/~lynn/subpubkey.html#rpo

the original x.509 identity certificate paradigm seemed to assume a total anarchy where the x.509 identity certificate would be used for initial introduction (like the letters of credit/introduction example) between totally random strangers. as such, the respective entities, as individual relying parties, had no prior information about the other party. In such a wide-open environment, it seemed necessary to pack as much information as possible into the certificate.

however, the retrenchment to the relying-party-only model placed all the actual entity information in a repository someplace, with just an account number for indexing an entity's repository entry. however, it was then trivial to demonstrate that attaching relying-party-only digital certificates to signed operations (sent off to the relying party) was redundant and superfluous ... since the relying party would have access to all the required information and didn't require a digital certificate to represent that information.

the other approach was to trivially demonstrate that there was no information in such certificates that wasn't also contained in the corresponding repository certified entry. in which case, if it could be shown that the relying party had access to the repository certified entry, then it was trivially possible to eliminate all fields in the certificate resulting in a zero-byte digital certificate ... which could be freely attached to everything.

in any case, the trivial case of a relying-party-only certificate is where the institution that is acting as the relying party is also the institution responsible for certifying the information and issuing a digital certificate (representing that certifying process).

in an online environment ... the relying-party-only model was extended, so that if the information could be accessed by other parties (that weren't directly responsible for the certification and issuing a digital certificate representing such certification) using purely online realtime transactions, then the use of a stale, static certificate as a representation of some certified information (in some repository) becomes redundant and superfluous (when the entities have direct and/or online, realtime access to the original information).

the shrinking market for offline credentials/certificates (and any electronic analogs) is the market segment where it isn't possible to have online access to the real information and/or the value of the operation can't justify the (rapidly declining) online access costs (aka certificates moving further and further into the no-value market segment).

part of this is the comfort that people have in the (physical) offline credential/certificate model ... not being used to operating in an online environment ... and therefore not having direct, realtime access to the actual information. The physical offline credential/certificate provides them with a sense of comfort regarding some certification process having actually taken place (which they have no means of directly validating). They then achieve similar comfort by finding an electronic certificate analog to these offline credential/certificate constructs. However, it is normally trivial to show that realtime, online direct access to the certified information is at least as valuable ... and frequently significantly more valuable than relying on a stale, static digital certificate (modulo the scenario for which credentials/certificates were originally invented, the place where the relying party doesn't have access to the actual information).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 14:58:11 -0600
Ken Johanson wrote:
Let me write you 100 $100 checks... will you cash them all and hands me the goods based on my self-generated, unvouched public key? Really?

ref:
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#31 X.509 and ssh

this was the physical world scenario from the 50s ... by the 60s you were starting to see (at least) business countermeasures to this scenario in the offline market, where business checks had a maximum value limit printed on the check (i.e. the check wasn't good if the individual wrote it for more than the limit).

the embezzler countermeasure was to create 100 checks for $100 each ... in order to get the $10,000 (or 200 checks of $5000 each for $1m).

the issue was trying to limit the authority of any one individual. an individual might have a $1000 total budget ... but to try to control it ... they would provide the individual with checks where no single check could exceed $100. The actual problem was to try and keep the individual within their budget. The problem with the offline model was that individual, single (even authenticated) transactions didn't aggregate.

This is where you got the online credit card model in the 70s. The consumer would do a transaction with the merchant ... and the merchant would forward the transaction to the responsible (certifying authority) institution for authentication and authorization. The merchant then got back a declined or approved response ... indicating the transaction had both been authenticated AND authorized (which was significantly more valuable to the merchant than just authenticated by itself).

Because of the various vulnerabilities and exploits in the offline credential/certificate model ... you saw businesses moving to online business cards sometime in the 80s ... but definitely by the 90s. Instead of an individual being given a stack of checks, they were given a corporate payment card. The corporate payment card had online business rules associated with it for authorizing financial transactions (in addition to authentication). The trivial business rule was whether the transaction ran over the aggregated budget (i.e. the individual could do any combination of transactions they wanted ... as long as they didn't exceed some aggregated limit ... something that is impossible to do with the offline, one-operation-at-a-time credential/certificate paradigm).

Once they got the aggregate budget/limit under control ... then they could also add other kinds of real-time transaction rules (use only at specific categories of merchants, use only at a specific set of merchants, use only for specific SKU codes, etc.) ... the online paradigm not only provides the realtime aggregation function (not possible with the old-fashioned, offline certificate/credential paradigm) but also a variety of much more sophisticated rules (which can be dynamically changed by time or other characteristics).
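
as an illustration of the difference, a minimal sketch (python, hypothetical names ... not any real authorization system) of an online authorization that applies both a per-transaction rule and an aggregation rule; an offline credential can only carry the first kind:

    # minimal sketch of online authorization business rules (hypothetical)
    class Account:
        def __init__(self, per_txn_limit, aggregate_limit):
            self.per_txn_limit = per_txn_limit      # could be embossed offline
            self.aggregate_limit = aggregate_limit  # needs realtime aggregation
            self.total_approved = 0.0               # online state

    def authorize(acct, amount):
        if amount > acct.per_txn_limit:
            return "declined"        # per-transaction rule (offline-style)
        if acct.total_approved + amount > acct.aggregate_limit:
            return "declined"        # aggregation rule (online-only)
        acct.total_approved += amount
        return "approved"

    acct = Account(per_txn_limit=100, aggregate_limit=1000)
    # 100 $100 transactions: each one passes the per-transaction rule,
    # but the aggregation rule cuts them off once the budget is exhausted
    results = [authorize(acct, 100) for _ in range(100)]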

What you have is the issuing financial institution as the registration authority and certifying authority. The financial institution performs the public key registration (typically defined as RA functions in the traditional Certification Authority paradigm) and then certifies the information. However, instead of actually issuing a certificate ... the institution specifies that it is only in support of online, realtime transactions (since that eliminates numerous kinds of threats, exploits, and vulnerabilities that you typically run into when dealing with an offline paradigm ... like the inability to handle aggregated transactions, as in the 100 $100 check scenario that I've used a number of times). The individual digitally signs their individual transactions that are sent to the merchant ... as in the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

it is not necessary to attach a digital certificate since it is required that the merchant send the transaction off to the financial institution (certification authority) for both authentication (with the onfile public key) as well as authorization (does it meet all the business rules, including realtime business rule considerations). Since the financial institution has the onfile, registered public key for verifying the digital signature, it is redundant and superfluous to require the attachment of any digital certificate (or at least any attached digital certificate with a non-zero payload actually carrying any real information).

one of the requirements given to the x9a10 working group for the x9.59 financial standard was to preserve the integrity of the financial infrastructure for all retail payments.

A recent post about various kinds of financial transaction threats when forced to fall back to an offline, credential/certificate operation
https://www.garlic.com/~lynn/aadsm22.htm#40 FraudWatch - Chip&Pin, a new tenner (USD10)

a few misc. past posts showing crooks getting around any per check business limit by going to multiple checks (as in your 100 $100 check example) ... and the business world countering with real-time, online aggregated transaction operation (making the offline credential/certificate operation redundant and superfluous).
https://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
https://www.garlic.com/~lynn/aadsm5.htm#spki4 Simple PKI
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm7.htm#auth Who or what to authenticate?
https://www.garlic.com/~lynn/aadsm9.htm#cfppki8 CFP: PKI research workshop
https://www.garlic.com/~lynn/aepay6.htm#gaopki4 GAO: Government faces obstacles in PKI security adoption
https://www.garlic.com/~lynn/aepay10.htm#37 landscape & p-cards
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/aadsm10.htm#limit Q: Where should do I put a max amount in a X.509v3 certificat e?
https://www.garlic.com/~lynn/aadsm10.htm#limit2 Q: Where should do I put a max amount in a X.509v3 certificate?
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm11.htm#40 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm12.htm#20 draft-ietf-pkix-warranty-ext-01
https://www.garlic.com/~lynn/aadsm12.htm#31 The Bank-model Was: Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#32 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/2000.html#37 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001g.html#21 Root certificates

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 15:36:32 -0600
Ken Johanson wrote:
You are implying that a cert must be attached for message. That is not true. SSL establishes sessions, where in the cert / keys are exchanged only at the beginning of the session. There is no "KB's worth of superfluous bytes" as you claim below, and certainly none of those are wasted in the case where identity must be assured.

ref:
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#31 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#31 X.509 and ssh

there tends to be a broad range of different applications. supposedly one of the big drivers for the certificate business in the mid-90s was in conjunction with financial transactions. the example was a financial transaction that typically requires 60-80 bytes ... and a large contingent of the PKI industry was behind appending certificates to all such transactions (as part of financial institutions paying $100 for digital certificates for everybody in the country that might do a financial transaction).

it was possible to trivially show

1) that if the financial institution already had all the information, then it was redundant and superfluous to attach a digital certificate to such financial transactions

2) that the typical financial transaction was 60-80 bytes and the payload overhead of having redundant and superfluous certificates was on the order of 4k-12k bytes (representing a payload bloat of two orders of magnitude).

So in SSL, the overhead is theoretically front-loaded for a session. However, when we were originally helping to patch together the actual business process use of SSL
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

a lot of the SSL HTTP transactions were individual transactions (not KB transactions).

the original business justification for SSL digital certificates was related to e-commerce ... as mentioned in the above references. The process was that the merchant got their domain name in a certificate. As part of the TCP/HTTP/SSL session setup, the person had typed the URL/domain name into the browser, the browser had contacted DNS to map that domain name to an IP-address, the browser contacted the webserver, the webserver responded with its certificate, and the browser validated the certificate and then checked the typed-in domain name against the domain name in the certificate.

however, the majority of the merchants operating webservers quickly discovered that SSL increased overhead by a factor of five compared to running straight HTTP. As a result ... you saw almost all merchants dropping back to using SSL purely for the payment/checkout phase of electronic commerce.

The problem now is that the URL/domain name that is typed in by the user is no longer checked against an SSL certificate. What now happens is that the user clicks on a button supplied by the webserver that generates the URL and domain name ... which is then used to check against the domain name in the certificate also supplied by the same webserver. The issue now is that the majority of the ecommerce use in the world violates fundamental kindergarten security principles ... since the webserver provides both the URL and the certificate that are checked against each other. It would take a really dumb crook not to have a completely valid certificate that corresponds to the domain they provide in the button.
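
a minimal sketch of that check (python, hypothetical domain names) ... the check only means something when the URL comes from what the user typed rather than from a button supplied by the same webserver that supplies the certificate:

    from urllib.parse import urlparse

    def ssl_domain_check(url, certificate_domain):
        # the browser check: domain in the URL vs. domain in the certificate
        return urlparse(url).hostname == certificate_domain

    # original model: user-typed URL checked against server-supplied cert
    ok = ssl_domain_check("https://some-merchant.example/checkout",
                          "some-merchant.example")

    # current practice: the (possibly hostile) server supplies both the
    # button URL and the certificate ... the check passes but proves nothing
    also_ok = ssl_domain_check("https://crook-payments.example/checkout",
                               "crook-payments.example")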

So the other issue was that HTTP (of which SSL use was a subset) had these truly trivial transaction payloads ... and was using the TCP protocol for doing it. Now TCP has a minimum seven-packet exchange for session setup (ignoring any added HTTP and/or additional SSL overhead). One of the characteristics in these early days was that most TCP implementations assumed a few, long-running sessions ... as opposed to a very large number of extremely short, transaction-like operations. One of the places this appeared was in the linear list operation supporting FINWAIT. There was a crisis period as HTTP (& SSL) caught on where numerous major webservers found that they were spending 99 percent of total cpu managing the FINWAIT list.
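
a minimal sketch (purely illustrative, not actual TCP stack code) of why a linear FINWAIT list melts down under very large numbers of short-lived connections, while a keyed structure doesn't:

    # legacy assumption: a few long-running sessions, so a linear list is fine
    finwait_list = []

    def close_connection_linear(conn_id):
        for i, c in enumerate(finwait_list):   # O(n) scan per close ...
            if c == conn_id:                   # ... O(n^2) aggregate work with
                del finwait_list[i]            # tens of thousands of short
                return                         # HTTP/SSL connections

    # transaction-like workload wants roughly constant cost per close
    finwait_table = {}

    def close_connection_hashed(conn_id):
        finwait_table.pop(conn_id, None)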

lots and lots of posts about SSL and certificates from the period
https://www.garlic.com/~lynn/subpubkey.html#sslcert

there is another issue somewhat involving the weak binding between domain name and domain name owner. the issue is that many of the certification authorities aren't the authoritative agency for the (SSL domain name server certificate) information they are certifying. much of the original justification for SSL related to mitm attacks was various integrity issues in the domain name infrastructure.

the process tends to be that a domain name owner registers some amount of identification information for their domain name ownership with the domain name infrastructure. the certification authorities then require that SSL domain name certificate applicants also provide some amount of identification information. then the certification authorities attempt to do the expensive, time-consuming, and error-prone process of matching the supplied identification information for the SSL domain name certificate with the identification information on file with the domain name infrastructure for the domain name.

as part of various integrity issues related to that process, there has been a proposal, somewhat backed by the ssl domain name certification authority industry, that domain name owners also register a public key with the domain name infrastructure (in addition to identification information). then future communication can be digitally signed and verified with the onfile public key. also the ssl domain name certification authority industry can require that ssl domain name certificate applications be digitally signed. then the certification authority can replace the expensive, time-consuming, and error-prone identification matching process with a much less-expensive and efficient authentication process by doing a real-time retrieval of the on-file public key from the domain name infrastructure for verifying the digital signature (in lieu of doing a real-time retrieval of the on-file identification information for the expensive, time-consuming and error-prone identification matching).

the two catch-22 issues here are

1) improving the overall integrity issues of the domain name infrastructure lessens the original justification for ssl domain name certificates

2) if the certification authority industry can rely on real-time retrieval of publickeys from the domain name infrastructure as the base, TRUST ROOT for all of their operations ... it is possible that other people in the world might also be able to do real-time retrieval of publickeys as a substitute to relying on SSL domain name certificates

so then one could imagine a real transaction-oriented SSL ... where the browser takes the URL domain name and requests the domain name to ip-address mapping. In the same response, the domain name infrastructure also piggybacks any registered public key for that webserver.

Now you can imagine that in the initial communication, the browser includes the actual request, encrypted with a randomly generated secret key, which is, in turn, encrypted with the onfile public key obtained from the domain name infrastructure. This is all packaged in a single transmission sent off to the webserver. If it becomes an issue, the various SSL options can also be registered with the domain name infrastructure (as part of registering the public key). This can be a true SSL transaction and eliminates all the existing SSL protocol chatter back and forth (and no digital certificate required).
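
a minimal sketch of what such a certificate-less transaction might look like (using the python "cryptography" package and assuming the onfile key registered with the domain name infrastructure is an RSA key ... all names here are illustrative, not part of any actual protocol):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def build_request(onfile_rsa_public_key, request_bytes):
        # onfile_rsa_public_key: piggybacked on the domain name to
        # ip-address lookup ... no digital certificate involved
        session_key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, request_bytes, None)
        wrapped_key = onfile_rsa_public_key.encrypt(
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        # single transmission to the webserver: wrapped key + nonce + request
        return wrapped_key, nonce, ciphertext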

so, if you start considering transaction efficiency ... there is the issue of the 7-packet minimum for TCP sessions. VMTP specified a 5-packet minimum exchange for reliable communication.

From my RFC index.
https://www.garlic.com/~lynn/rfcietff.htm
https://www.garlic.com/~lynn/rfcidx3.htm#1045
1045 E
VMTP: Versatile Message Transaction Protocol: Protocol specification, Cheriton D., 1988/02/01 (123pp) (.txt=264928) (VMTP)

as usual in my RFC index summary entries, clicking on the ".txt=nnn" field retrieves the actual RFC.

However, when we were doing the XTP protocol specification ... we actually came up with a minimum 3-packet exchange for a reliable transaction
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

lots of other posts mentioning the catch-22 issue for the SSL domain name certification authority industry moving to higher integrity domain name infrastructure operation
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#pkiart2 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#5 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm13.htm#26 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm17.htm#18 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#43 SSL/TLS passive sniffing
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#42 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm22.htm#0 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#4 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#18 "doing the CA statement shuffle" and other dances
https://www.garlic.com/~lynn/aadsm22.htm#19 "doing the CA statement shuffle" and other dances
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2001l.html#22 Web of Trust
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002j.html#59 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002m.html#30 Root certificate definition
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#29 SSL questions
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
https://www.garlic.com/~lynn/2003f.html#25 New RFC 3514 addresses malicious network traffic
https://www.garlic.com/~lynn/2003l.html#36 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003p.html#20 Dumb anti-MITM hacks / CAPTCHA application
https://www.garlic.com/~lynn/2004b.html#41 SSL certificates
https://www.garlic.com/~lynn/2004g.html#6 Adding Certificates
https://www.garlic.com/~lynn/2004h.html#58 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#35 Do I need a certificat?
https://www.garlic.com/~lynn/2005e.html#22 PKI: the end
https://www.garlic.com/~lynn/2005e.html#45 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#51 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005g.html#0 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#9 What is a Certificate?
https://www.garlic.com/~lynn/2005h.html#27 How do you get the chain of certificates & public keys securely
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005k.html#60 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2005t.html#34 RSA SecurID product
https://www.garlic.com/~lynn/2005u.html#9 PGP Lame question
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#29 Caller ID "spoofing"

X.509 and ssh

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 17:06:29 -0600
Ken Johanson wrote:
Anne and/or Lynn have recited so many perceived "drawbacks" to x509 in general, even in light of their usage in just about every modern security / identity communication system of late. But what *constructive* suggestions to improve the supposed deficiencies, have they offered? What specific technology are they claiming is a good/better replacement (some patent-pending technology of theirs perhaps?)? And why do they so repeatedly recite "used to be" limitations, but not also mention (or show awareness of) the current state of the art fully address those?

so there is actually complete agreement with the SSL domain name Certification Authority industry proposal to have domain name owners register onfile public keys when they register domain names.

rather than restricting the use of the realtime, online, onfile public keys to just validating digital signatures on communication with the domain name infrastructure and on ssl domain name digital certificate applications ... make access to the online, onfile public keys available to everyone in the world. Then everybody has online, realtime access to the real public keys w/o having to resort to redundant, superfluous, stale, static digital certificates ... and can also optimize the SSL protocol chatter.
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh

another trivial case is the x9.59 financial standards protocol, which was designed to preserve the integrity of the financial infrastructure for all retail payments. the paying entity has a pre-existing relationship with their financial institution. the paying entity's financial institution keeps some amount of information about their account holders. one of the pieces of information about their account holders is the account holders' public keys. If the paying entity's financial institution already has the registered public keys for all their account holders (as well as other registered information), it is trivial to demonstrate that it is redundant and superfluous to append stale, static digital certificates to transactions sent to the paying entity's financial institution for authentication and authorization (a small sketch of this flow follows the references below).
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
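
a minimal sketch of that flow (python "cryptography" package; ed25519 is purely a stand-in for whatever keys would actually be registered, and the account/merchant fields are made up ... this is not the x9.59 message format):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # registration: the account holder's public key goes onfile with their
    # financial institution (the RA/CA role, but no certificate is issued)
    account_key = Ed25519PrivateKey.generate()
    onfile = {"acct-123": account_key.public_key()}

    # the paying entity digitally signs the transaction; nothing is attached
    txn = b"acct-123|merchant-456|amount=49.95"
    signature = account_key.sign(txn)

    # the financial institution authenticates with the onfile public key
    # (and then applies its realtime authorization business rules)
    def authenticate(account_number, txn, signature):
        try:
            onfile[account_number].verify(signature, txn)
            return True
        except InvalidSignature:
            return False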

so the next step is to trivially show that when the relying party has direct access to the real information ... say in the case where the authoritative agency for domain name ownership has onfile public keys for the domain name owner and is willing to provide realtime access to such information ... the stale, static digital certificates (representing that registration and certification) are redundant and superfluous.

Another is any case where a registration and certification authority is providing online, real-time business processes in support of digitally signed transactions. If the relying party trusts a certification authority to issue a stale, static digital certificate that can be used for authentication ... relying parties may find it even more valuable if the certification authority would provide real-time authentication and authorization of transactions (even taking financial liability) for digitally signed transactions. The same registration and certification authority can operate in both the offline, stale, static digital certificate scenario (a credential/certificate representing some real certification business process, for purely authentication) and in the real-time, online support of digitally signed transactions (which can include authorization and liability in addition to any sense of authentication).

It turns out that one of the shortcomings for accepting liability in the financial transaction case with respect to the offline digital certificate model ... is the one hundred checks of $100 each (i.e. the digital certificate can emulate the business check scenario and say the certificate is not good for any checks over $100) or two hundred checks of $5000 each. The digital certificate just provides a method for authenticating ... and potentially could include a maximum per transaction limit (following the stale, static business check methodology). However, the infrastructure was still vulnerable to aggregation attacks ... and the countermeasure was online transactions that supported not only per transaction rules but could also support aggregation rules ... something not easily done or controlled with an offline, stale, static paradigm.

The online, integrated authentication and authorization model could provide both per transaction business controls as well as aggregated business controls. Once you have an online, integrated authentication and authorization paradigm ... it makes the offline, stale, static processes redundant and superfluous.

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 17:35:39 -0600
Ken Johanson wrote:
If you can't answer to (but only delete and ignore) the obvious questions that were posed to you, then there is no conversation to he had.

And you are *still* referring to what was, instead of now, and distorting things merely to support your side (statements like "grossly overloaded" and was redundant and superfluous (again without an example) which are as much balderdash as saying that your drivers licensee being overloaded just because it makes you accountable and traceable to everyday people you may encounter).

If you don't like it, then don't use certs. For now, they're optional -- (unlike your drivers license)... even though they are far and away and growing as the most used system for secure internet protocols, whether issued by the CAs that you despise, or by your own personal CA.


re:
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#31 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#32 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#34 X.509 and ssh

so the stale, static, offline credential, certificate, license, diploma, letters of credit, letters of introduction methodology has served a useful business requirement in the physical world for centuries, namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

digital certificates are just electronic analogs of their physical world counterparts, meeting the same business requirements ... namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

so in the mid-90s there were efforts looking at chip-based, digital certificate-based driver's licenses ... as a higher valued implementation for better trust and operation.

however, it ran into some of the similar retrenchments that faced x.509 identity certificates ... even the physical driver's license contains unnecessary privacy information ... like date-of-birth, creating identity theft vulnerabilities.

the other value proposition justification was that high value business processes ... like interactions with police officers ... could supposedly be better trusted using the higher value and higher integrity chip-based driver's licenses incorporating digital certificate technology.

however, police officers at that time were already in transition to much higher value online transactions. rather than relying on the information in a driver's license ... the driver's license simply provided an index into the online repository ... and the police officer used it to do realtime, online access of the online repository, retrieving realtime information for authenticating and otherwise validating the entity they were supposedly dealing with. Any information (other than the simple repository lookup value) in the driver's license became redundant and superfluous.

All the higher value driver's license related operations were moving to online, realtime operation ... leaving any information content requirements for driver's licenses to no-value operations that couldn't justify an online operation.

If you are faced with a situation where the driver's license has a very defined use (say a trivial barcode to index a repository that contains your complete history and numerous biometric mechanisms for validating who you are) ... then any additional feature of a driver's license for use in no-value operations ... needs to be financially justified by the no-value operations (since such features are redundant and superfluous for all the higher value business processes that can justify doing realtime online operations).

The online characteristic can also be used to help address some of the existing identity theft vulnerabilities related to driver's licenses. For instance, an individual can authorize ... in a manner similar to how they might digitally sign an x9.59 transaction
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

... a transaction that answers yes/no to whether they are at least 21 years old. the actual birth-date never has to be divulged ... the certification authority just responds yes/no in a manner similar to how certification authorities respond approved/declined to existing realtime, online financial transactions.
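
a minimal sketch of such a yes/no certification response (hypothetical names; the repository lookup and threshold check stand in for whatever the certifying authority actually runs):

    from datetime import date

    # the certifying authority's repository holds the actual birth date
    repository = {"entity-42": date(1984, 6, 15)}

    def at_least_age(entity_id, years, today=None):
        today = today or date.today()
        born = repository[entity_id]
        age = today.year - born.year - (
            (today.month, today.day) < (born.month, born.day))
        # only yes/no leaves the authority ... the birth date is never divulged
        return "yes" if age >= years else "no"

    answer = at_least_age("entity-42", 21)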

This is sort of the set of "FAST" transaction proposals by FSTC
http://www.fstc.org/

that could even ride the same 8583 rails as existing financial transactions ... but in a manner similar to answering yes/no to financial transactions (w/o disclosing things like current account balance or transaction history) ... could answer yes/no to other kinds of certifications.

some other past posts mentioning the digital certificate model for drivers licenses from the mid-90s ... and why it sort of evaporated.
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/aepay2.htm#position AADS NWI and XML encoded X9.59 NWI
https://www.garlic.com/~lynn/aepay4.htm#comcert5 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay6.htm#itheft "Gurard against Identity Theft" (arrived in the post today)
https://www.garlic.com/~lynn/aepay12.htm#3 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aadsm5.htm#ocrp3 Online Certificate Revocation Protocol
https://www.garlic.com/~lynn/aadsm7.htm#idcard AGAINST ID CARDS
https://www.garlic.com/~lynn/aadsmail.htm#liability AADS & X9.59 performance and algorithm key sizes
https://www.garlic.com/~lynn/aadsm11.htm#37 ALARMED ... Only Mostly Dead ... RIP PKI
https://www.garlic.com/~lynn/aadsm11.htm#38 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm13.htm#1 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#4 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#5 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm14.htm#13 A Trial Balloon to Ban Email?
https://www.garlic.com/~lynn/aadsm15.htm#1 invoicing with PKI
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda
https://www.garlic.com/~lynn/aadsm19.htm#48 Why Blockbuster looks at your ID
https://www.garlic.com/~lynn/aadsm20.htm#42 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#20 Some thoughts on high-assurance certificates
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001f.html#77 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001k.html#6 Is VeriSign lying???
https://www.garlic.com/~lynn/2001l.html#29 voice encryption box (STU-III for the masses)
https://www.garlic.com/~lynn/2001n.html#56 Certificate Authentication Issues in IE and Verisign
https://www.garlic.com/~lynn/2002m.html#20 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002n.html#40 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003m.html#21 Drivers License required for surfing?
https://www.garlic.com/~lynn/2004i.html#4 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005g.html#34 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005i.html#33 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005l.html#32 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005t.html#6 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2005u.html#37 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2006.html#37 The new High Assurance SSL Certificates

X.509 and ssh

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Tue, 11 Apr 2006 18:30:22 -0600
Ken Johanson wrote:
Let me write you 100 $100 checks... will you cash them all and hands me the goods based on my self-generated, unvouched public key? Really?

as stated in another post
https://www.garlic.com/~lynn/2006f.html#32 X.509 and ssh

this was a real business fraud problem ... to which businesses responded (dating back some number of decades) with a stale, static credential/certificate in the form of a check embossed with a stale, static, per-transaction business rule ... a per-check transaction limit. some of the digital certificates proposed for financial purposes even had a similar stale, static, per-transaction business rule embossed in the electronic digital certificate (aka not good for transactions with value greater than xxxx).

the problem that started being addressed sometime in the 80s with payment cards used for realtime, online operation (where the backend business rules could include a broad range of per-transaction specifications ... but there could also be aggregation business rules ... representing realtime rules spanning a collection of transactions) ... was fraud done by using 100 $100 checks or possibly 200 $5000 checks (whatever the per-transaction limit was).

the relying parties liked it much better because not only were they getting realtime authentication responses back from the authoritative agency (or certification authorities, if you wish), but they were also getting back realtime authorization responses (which you couldn't do using the offline stale, static credential/certificate paradigm)

for the various digital certificate-based financial transaction proposals ... they went thru all the legacy business rule proposals that had been used in stale, static credentials/certificates in the past. however, it was trivial to show that such stale, static legacy approaches couldn't address the aggregation countermeasures implemented in online, realtime operations. furthermore, any authoritative agency supporting such infrastructures could easily have all the information necessary for both doing realtime authentication and realtime authorization .... making any stale, static digital certificate operation redundant and superfluous.

so the stale, static, offline credential, certificate, license, diploma, letters of credit, letters of introduction methodology has served a useful business requirement in the physical world for centuries, namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

digital certificates are just electronic analogs of their physical world counterparts, meeting the same business requirements ... namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

for business operations that have moved into an online paradigm with realtime, online access to all necessary information ... the stale, static digital certificate methodology represents an archaic, obsolete, redundant and superfluous paradigm.

Over my head in a JES exit

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 12 Apr 2006 04:54:42 -0600
Brian Inglis writes:
Couldn't just migrate frozen merged source, object, and load libs over and reverse engineer the zaps from the diffs? Or was nothing available internally like SuperC(?IIRC) and SuperZap?

the structure of the source is the same in both instances ... and so you could possibly devise translation rules from one representation of source changes to a different representation of the same source changes. however, there is still a lot of metadata in both environments about what the specific source changes were related to, clusters of specific source changes spanning multiple source files, and what feature, function, and/or bug fixes the source changes were related to.

my impression was that the later mvs-based internal management system also tended to be somewhat closed ... and some of the metadata information couldn't just be provided in static file form ... but might have to be entered as keyboard input. automation here then moves into hllapi-type operation (before PCs even came on the scene).

it may have been possible to write customized parasite/story for automating the keyboard input. misc. past parasite/story posts:
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2003i.html#73 Computer resources, past, present, and future
https://www.garlic.com/~lynn/2003j.html#24 Red Phosphor Terminal?
https://www.garlic.com/~lynn/2004e.html#14 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006.html#3 PVM protocol documentation found
https://www.garlic.com/~lynn/2006c.html#14 Program execution speed

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Over my head in a JES exit

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Over my head in a JES exit
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 12 Apr 2006 05:05:27 -0600
for even more drift, at a meta-level ... many of the "exit" constructs were invented as part of the oco effort (wars).

cp/cms not only had a multi-level source update infrastructure, but also shipped full source as well as the complete multi-level source update operation to customers.

hasp was shipping full source also.

customers accustomed to building products from source ... could handle customized feature/function by just modifying the source.

in the oco (object code only) transition, products that had provided full source maintenance operation to customers ... had to invent some number of "exits" to support the more common customer feature/function customizations.

misc. past postings mentioning oco transition:
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#22 Hercules, OCO, and IBM missing a great opportunity
https://www.garlic.com/~lynn/2002c.html#4 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2002p.html#3 IBM OS source code
https://www.garlic.com/~lynn/2002p.html#7 myths about Multics
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004d.html#19 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004e.html#10 What is the truth ?
https://www.garlic.com/~lynn/2004m.html#47 IBM Open Sources Object Rexx
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2004p.html#5 History of C
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2004p.html#21 need a firewall
https://www.garlic.com/~lynn/2005c.html#42 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#34 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#15 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005j.html#29 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#41 TSO replacement?
https://www.garlic.com/~lynn/2005r.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005t.html#42 FULIST
https://www.garlic.com/~lynn/2005u.html#57 IPCS Standard Print Service
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 12 Apr 2006 04:43:23 -0600
ref:
https://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#31 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#32 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#34 X.509 and ssh
https://www.garlic.com/~lynn/2006f.html#35 X.509 and ssh

so another scenario comparing online-based transaction operation and offline credential/certificate operation is door badge systems.

the legacy, oldtime door badge systems have been offline. as the higher value online door badge systems came into being, the offline door badge systems have moved into the lower cost market niche.

in the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are

an electronic door badge represents something you have authentication. in the legacy offline door badge system, the badge would present some unique information that validates a specific badge and differentiates it from other badges (aka unique something you have authentication) and then some other code that provides the basis for an authorization decision (by a specific door processor). before the introduction of online door badge systems, the authorization information provided by the badge for the offline paradigm was becoming more and more complex ... listing specific classes of doors that were authorized and/or specific doors for which it was authorized. this overloaded the function of the badge with both something you have authentication as well as providing authorization information for the offline door badge processor.

with the introduction of the online door badge, the construct served by the badge could return to purely a unique something you have authentication. The badge provides unique information to differentiate itself from other badges; the online system looks up that specific badge entry and finds the corresponding entity (associated with the badge), including the entity's permissions, and determines whether or not to authorize the door to open. The online system can typically support a much more complex set of realtime permission rules ... compared to features provided by offline door badge systems ... not only stale, static per-transaction rules, but say a capability to raise an alarm if the same something you have badge is used to enter multiple successive times w/o having been used for exits between the entries. This is comparable to the richer set of features in online payment card operation ... where the set of authorization rules of an online paradigm can include authorization spanning an aggregation of transactions (like whether an aggregate set of transactions has exceeded the entity's credit limit) as opposed to single transaction operation.
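
a minimal sketch (hypothetical, not any particular access control product) of the kind of aggregation rule an online door badge system can enforce but a stale, static offline badge can't ... successive entries without an intervening exit raise an alarm:

    # online entry: permissions plus realtime state for the aggregation rule
    entries = {"badge-7": {"doors": {"lab", "lobby"}, "inside": False}}

    def badge_event(badge_id, door, event):
        entry = entries.get(badge_id)
        if entry is None or door not in entry["doors"]:
            return "deny"                 # per-event authorization rule
        if event == "entry":
            if entry["inside"]:
                return "alarm"            # aggregation rule: needs online state
            entry["inside"] = True
        elif event == "exit":
            entry["inside"] = False
        return "open"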

now various door badge systems ... whether online or offline ... have tended to implement static information to represent a unique physical object (something you have authentication). systems relying on static data representation are a lot more vulnerable to eavesdropping and/or skimming, and vulnerable to replay attacks (possibly with counterfeit/cloned badges).

a countermeasure to such skimming/eavesdropping (in support of replay attacks) is a badge with a chip that implements public/private key technology. the chip has a unique private key which is used to digitally sign randomly generated data. the relying party then validates the digital signature (with the corresponding public key) as part of establishing a unique token in support of a something you have authentication paradigm.
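
a minimal sketch of that dynamic data (challenge/response) authentication, with ed25519 from the python "cryptography" package purely as a stand-in for whatever the chip actually implements:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    badge_private = Ed25519PrivateKey.generate()   # held inside the chip
    badge_public = badge_private.public_key()      # known to the relying party

    challenge = os.urandom(32)                # freshly generated random data
    response = badge_private.sign(challenge)  # a replayed old response won't verify

    try:
        badge_public.verify(response, challenge)
        authenticated = True
    except InvalidSignature:
        authenticated = False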

the public/private key door badge can be implemented in an offline door badge system, where the door badge is providing both authentication and authorization information. The chip in the door badge not only returns a digital signature to the door reader but also a digital certificate. the digital certificate contains the public key necessary to validate the digital signature (performing the authentication part of the operation) and the other information in the digital certificate provides the basis for the door deciding whether or not to open (authorization). this technology change is a countermeasure to various kinds of replay attacks against offline door badge systems.

an online public/private key door badge system implementation doesn't require a digital certificate since the online entry contains the public key necessary to validate a digital signature (for a specific badge, as part of establishing a unique badge in support of something you have authentication). the online entry also contains the permission specifications in support of the realtime, online authorization rules. in the higher value, online door badge system operation, any digital certificate (providing information for both authentication and authorization functions) is redundant and superfluous ... since that information is part of the operation of the online door badge system. The online systems tend to have a more sophisticated authorization rule implementation, which tends to also be more valuable and more complex than possible in an offline door badge system using stale, static authorization rules.

The digital signature paradigm can be viewed as providing dynamic data authentication countermeasure to skimming/evesdropping (and replay attacks typically associated with static data authentication infrastructures). However, the offline paradigm, using a digital certificate tends to retain stale, static authorization rule implementation. The transition to online operation can take advantage of both the dynamic data authentication (provided by the digital signature paradigm) and also dynamic authorization rules (provided by having aggregated and realtime information available in the online system).

so the stale, static, offline credential, certificate, license, diploma, letters of credit, letters of introduction methodology has met a useful business requirement in the physical world for centuries, namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

digital certificates are just electronic analogs of their physical world counterparts, meeting the same business requirements ... namely providing a mechanism to represent some information to relying parties who have had no other means for accessing the actual information.

a few past postings discussing the offline/online door badge system comparison:
https://www.garlic.com/~lynn/aadsm16.htm#12 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/2003b.html#50 Authentication w/o user ids and passwords
https://www.garlic.com/~lynn/2004j.html#15 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2005g.html#34 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2006c.html#16 X.509 and ssh

A very basic question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A very basic question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 12 Apr 2006 05:15:27 -0600
Phil King wrote:
Ted,

A general technique I have used successfully to estimate the CPU usage (or instruction path length) of a "black box" piece of code is as follows:

(1) Measure the resource usage of the standard job - call this R1

(2) Modify the job (in the case your COBOL program) to call the "black box" subroutine N *extra* times -- that is a total of (N+1) calls. Choose a large value of N to give you greater precision or a small value of N to reduce the total resource usage.

(3) Measure the resource usage of the modified job - call this R2.

Then the average resource use of the black box = (R2-R1)/N


at a very high conceptual level ... this is basically what multiple regression analysis does w/o having to modify the application (other than having sufficient activity information about the application runs). it uses the differences in activity across multiple executions to approximate the relationship between the different kinds of activity and the resource usage.
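
a minimal sketch of that idea (made-up numbers; numpy least squares standing in for a full multiple regression package) ... given per-run activity counts and total cpu per run, the fit approximates the per-unit cost of each kind of activity without modifying the application:

    import numpy as np

    # rows: application runs; columns: activity counts per run
    # (e.g. black-box calls, records processed)
    activity = np.array([[100.,  5000.],
                         [250.,  4000.],
                         [400., 12000.],
                         [150.,  9000.]])
    cpu_seconds = np.array([15.1, 20.4, 44.2, 25.6])

    # least-squares fit: cpu ~= coeff[0]*calls + coeff[1]*records
    coeff, *_ = np.linalg.lstsq(activity, cpu_seconds, rcond=None)
    # coeff[0] approximates cpu seconds per black-box call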

reference to mra
https://www.garlic.com/~lynn/2006f.html#22 A very basic question

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 13 Apr 2006 09:49:51 -0600
Steve O'Hara-Smith writes:
The ~1960 baby boom became the 1970s school crisis, became the 1980s youth unemployment crisis, became the 1990s and early 21st century middle management bulge and will become the 2020s pensions crisis.

I've mentioned before Boyd's slightly different view about top-down, heavyweight management structure ... recent post
https://www.garlic.com/~lynn/2006f.html#14 The Pankian Metaphor

I was recently watching a talk by the Comptroller General on CSPAN; he mentioned his talk being on the web, America's Fiscal Future:
http://www.gao.gov/cghome/nat408/index.html

where some numbers are provided. For one set of numbers he made some comment that some of the numbers (from congressional budget office) were extremely conservative based on four assumptions. He read off each assumption and asked people in the audience (meeting of state governors) to raise their hand if they believed the assumption was valid (nobody raised their hand for any one of the assumptions). The point was the probable financial impact would be significantly more severe (assuming any set of realistic assumptions).

He repeatedly made the point that he believed that nobody in congress has been capable of even simple school math for at least the past 50 years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 14 Apr 2006 13:56:23 -0600
Keith writes:
Nonsense. The bottom quintile in the US have a negative (-5.4%, IIRC) income tax rate.

re:
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor

there is the periodic line about numbers being made to lie. statements about tax reform benefiting the "rich" ... frequently that is because the "rich" are the ones paying the taxes.

So here is a fairly representative analysis, "Tax Distribution Analysis and Shares of Taxes Paid - Updated Analysis"
http://www.house.gov/jec/publications/109/Research%20Report_109-20.pdf
http://www.house.gov/jec/publications/109/rr1094tax.pdf

Distribution is shown in two ways ... one is the percent of income paid in taxes as a function of income (which doesn't differentiate whether this is gross income or net income ... although a lot of news recently is about how AMT rules are catching more and more). The other distribution is the percent of total federal taxes paid as a function of income.

in any case, recent data from the above report:
IRS data for 2003, the most recent available, show that the top half of taxpayers ranked by income continue to pay over 96 percent of Federal individual income taxes while the bottom half accounts for just less than 3.5 percent. The data show the highly progressive nature of the Federal income tax.

The top one percent of tax filers paid 34.27 percent of Federal personal income taxes in 2003, while the top ten percent accounted for 65.84 percent of these taxes. To be counted in the top one percent taxpayers needed Adjusted Gross Income (AGI) of $295,495 or more. The 2003 AGI cut-off amount for the top ten percent is $94,891, while the cut-off amount for the top/bottom fifty percent is $29,019.


... snip ...

also, a table from the same report ... it doesn't do a further breakout of the bottom 50 percent ... so it doesn't show any actual data about the bottom 25 percent actually having negative tax (just that the bottom 50 percent paid only 3.46 percent of the total federal income tax paid).

also from above:
Percentiles ranked by AGI    AGI threshold on percentiles    Percentage of Fed. personal income tax paid
Top 1%                       $295,495                        34.27%
Top 5%                       $130,080                        54.36%
Top 10%                      $94,891                         65.84%
Top 25%                      $57,343                         83.88%
Top 50%                      $29,019                         96.54%
Bottom 50%                   <$29,019                        3.46%

... snip ...

Of course, the above doesn't show situations where gross income is significantly larger than adjusted gross income. However, with the top ten percent (of the population by AGI) paying over half of total Fed. personal income tax paid (65.84 percent), then any personal income tax reform is likely to affect them much more than the bottom half of the population (by AGI), which accounts for only 3.46 percent of total Fed. personal income tax paid.

Conversely, for a category (the bottom 50 percent) that is already paying extremely little of the total federal personal income tax (3.46 percent), any change isn't likely to have much effect on them. If you are already close to paying zero of the total federal personal income tax, it is hard to have you pay less than zero (modulo using income tax refunds as a mechanism for low-income social payments and referring to them as a negative income tax, verging on one of those 1984 scenarios changing what words mean).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 14 Apr 2006 20:36:54 -0600
greymaus@gmaildo.tocom wrote:
I would think that I am arguing from a different direction that you, My point is that as a certain point, fairly low in the income scale, one would be paying more of your income in tax that people higher. Its an immensely complicated situation. For instance here, someone in

in my original post,
https://www.garlic.com/~lynn/2006f.html#42 The Pankian Metaphor

i postulated that you could have numbers for both types of distribution scenarios ... one is a distribution showing the percent of income paid for taxes by group and the other a distribution showing the percent of total federal income taxes paid by different groups. i then gave a couple of references that had numbers for one type of distribution ...
http://www.house.gov/jec/publications/109/Research%20Report_109-20.pdf
http://www.house.gov/jec/publications/109/rr1094tax.pdf

again from the above (for one of the types of distributions):
This can be illustrated by one concrete example. In 2000, the Clinton Treasury Department released data indicating that a tax reduction proposal then under consideration in Congress was skewed, but the JEC staff reconstructed the data to produce what was not disclosed: the proportion of taxes paid by each income fifth would in fact be unchanged.

For example, the top fifth paid 65 percent of federal taxes before the tax relief legislation, and would pay 65 percent of total taxes after this legislation took effect.


... snip ...

so the referenced House report gives information on one of the two methods of calculating tax payment distribution. The issue now is attempting to find some actual statistics regarding the actual percent of income paid in taxes across the total income distribution.

Back to the opening statement in my original post
https://www.garlic.com/~lynn/2006f.html#42 The Pankian Metaphor

so more use of search engine:

A Comparison of Tax Distribution Tables: How Missing or Incomplete Information Distorts Perspectives:
http://www.ibm.com/smarterplanet/us/en/healthcare_solutions/article/smarter_healthcare_system.html

One of the discussions in the above reference is not about real data from distribution tables ... but about hypothetical distribution tables based on assumptions regarding future events.

However, the above reference does go into some detail about both kinds of tax distribution calculations. In the first page or two of search engine responses for tax, distribution, analysis, shares ... the above seems to be the most comprehensive treatment of the subject.

There is also the issue of what is meant by income and tax rate.

A trivial example is the tax code treatment of "tax-free" bonds. Say somebody invests $1m in normal bonds and earns 8 percent per annum, or $80k. For calculation simplicity, they pay a marginal tax rate of 50 percent (or $40k). Somebody else invests $1m in tax-free bonds earning 4 percent per annum. They also show $40k of after-tax revenue, but have paid zero tax on that revenue. Nominally, in both cases, the parties have identical $40k after-tax income on an identical $1m investment. However, in one case somebody has paid a 50 percent tax rate and in the other case the person paid zero tax. Is it "fair" or "unfair" that one of them (having the exact same after-tax income on the exact same total investment) paid no tax??
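To make the arithmetic concrete, here is a small illustrative calculation (a sketch only; the principal, yields, and 50 percent marginal rate are the hypothetical numbers from the example above, not actual tax rules):

def after_tax(principal, annual_yield, marginal_rate):
    # interest earned, tax paid at the marginal rate, and what is left over
    interest = principal * annual_yield
    tax = interest * marginal_rate
    return interest - tax, tax

# taxable bond: $1m at 8 percent, taxed at a 50 percent marginal rate
net1, tax1 = after_tax(1_000_000, 0.08, 0.50)
# "tax-free" bond: $1m at 4 percent, interest exempt from income tax
net2, tax2 = after_tax(1_000_000, 0.04, 0.00)

print(f"taxable bond : ${net1:,.0f} after tax, ${tax1:,.0f} tax paid")   # $40,000 / $40,000
print(f"tax-free bond: ${net2:,.0f} after tax, ${tax2:,.0f} tax paid")   # $40,000 / $0

Both cases end up at $40k after tax on the same $1m investment; only the taxable one shows any tax paid.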

So one of the things the AMT (alternative minimum tax) calculation is trying to do is pump up both the actual tax individuals pay as well as the percent of tax paid (almost regardless of the type of income). I have some recollection that some of the original arguments in support of the AMT were about the tax-free bond issue and people not paying any tax at all (regardless of whether their actual after-tax income was fair or not fair).

This can also create quite a bit of conflict among the objectives behind tax code legislation ... in one case promoting tax-free investments and then potentially turning around and penalizing those same investments. The obvious response is that a smart investor should be taking all such factors into account when making investment decisions.

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sat, 15 Apr 2006 17:07:40 -0600
Paul Ding writes:
Showing that most general income tax revenue comes from high-income individuals answers the wrong question. To show that the federal tax system is progressive, you need to show that the amount of revenue AS A PERCENTAGE OF INCOME is higher for those with high incomes.

the post
https://www.garlic.com/~lynn/2006f.html#42 The Pankian Metaphor

was trying to actually focus on the evaluation methodology as opposed to specifics ... which I hopefully explained better in the subsequent post
https://www.garlic.com/~lynn/2006f.html#43 The Pankian Metaphor

along with its reference to: A Comparison of Tax Distribution Tables: How Missing or Incomplete Information Distorts Perspectives:
http://www.ibm.com/smarterplanet/us/en/healthcare_solutions/article/smarter_healthcare_system.html

my original post in the series
https://www.garlic.com/~lynn/2006f.html#42 The Pankian Metaphor

started off citing the comptroller general's talk (carried on cspan)
http://www.gao.gov/cghome/nat408/index.html

besides observing several times that none of the responsible entities for at least the past 50 years seem capable of doing simple math operations, another observation (in the talk) is the total lack of valid metrics regarding whether programs are meeting objectives at all (i.e. which programs can be considered successful and which programs aren't ... based on something other than how much total money is involved).

this has some resonance for me because of heavy involvement as an undergraduate doing dynamic adaptive resource management algorithms, which required extensive instrumentation and evaluation (i.e. whether the results corresponded with predictions in any way)
https://www.garlic.com/~lynn/subtopic.html#fairshare

an early experience in university datacenter metrics and accountability was the switch-over of the univ. datacenter to a quasi profit center. that datacenter had been operating under a flat budget from the state legislature and was supposed to provide services to the whole univ. (both education and administrative). The result was lots of individuals and groups constantly badgering the datacenter for special consideration, with enormous amounts of politics; datacenter management was constantly being called on to make value judgments about the relative merits of the special interests. The change-over was that the univ. datacenter went to the state legislature and got its status changed to a pay-as-you-go profit center. The datacenter provided services and was paid for those services. If some special interest group wanted special consideration, it was no longer a matter of arm twisting datacenter management ... they had to go to the state legislature for additional funding to pay for the services. The state legislature could put all pending funding requests on a consistent, level playing field (and it was no longer necessary for datacenter management to make those value judgments, especially when lacking sufficient overall information to accurately prioritize the demands).

Part of the problem with allowing special interest considerations to permeate all aspects of the infrastructure is that you don't know 1) if it is effective (where are the metrics and standards) and 2) whether individual operations have sufficient information to make a valid value judgment. For instance, the $300 car ... how does anybody know if it is being registered by somebody really in need of assistance or not? Additionally, if it is truly a needy person that requires it for commuting ... then the metric might be both the level of need and the commute requirement.

circa 1994, the census department published some study that included the statement that half of all manufacturing jobs in the US are subsidized ... aka half of all manufacturing employees were getting more in benefits than the value they provided. i then did a simple analysis showing that if the trend continued, in 25 years 97 percent of the jobs would be subsidized (i.e. only three percent of the people would be providing more benefit than the value they received); a back-of-envelope sketch of that extrapolation follows the links below. previous posts on the subject:
https://www.garlic.com/~lynn/2002q.html#9 Big Brother -- Re: National IDs
https://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004d.html#18 The SOB that helped IT jobs move to India is dead!
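For what it's worth, the extrapolation above can be reconstructed as a back-of-envelope sketch. The only numbers taken from the original claim are the roughly 50 percent subsidized in 1994 and the 97 percent figure 25 years out; the assumption (mine) is that the unsubsidized share shrinks by a constant factor each year:

# back-of-envelope sketch of the 50% -> 97% subsidized extrapolation
# assumption: the unsubsidized share declines by a constant factor each year
start_unsubsidized = 0.50   # 1994 census figure: half the jobs pay their own way
end_unsubsidized = 0.03     # the "only three percent" figure, 25 years later
years = 25

# constant annual decay factor implied by going from 50% to 3% in 25 years
decay = (end_unsubsidized / start_unsubsidized) ** (1.0 / years)
print(f"implied annual shrinkage of the unsubsidized share: {1 - decay:.1%}")

for y in range(0, years + 1, 5):
    unsub = start_unsubsidized * decay ** y
    print(f"{1994 + y}: {1 - unsub:.0%} of jobs subsidized")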

one issue was that since the nature of the subsidy benefits spanned a broad range of gov. and industry financials (including salary), you could have individuals benefiting from subsidies without it showing up in the statistics of any gov. programs (the individuals could even be paying significant taxes on their subsidized salaries).

the other issue was that by the very nature of subsidies, the subsidized benefits had to be taken from someplace where the value produced far exceeded the costs. if half the employees were providing value in excess of the benefits they received ... some of the excess could be used to subsidize the other half. as the share of employees providing value in excess of the benefits they received approached three percent, it would be harder and harder to find sources to maintain the subsidies (and there may be points that result in discontinuities and non-linear adjustments).

if the aggregate total isn't accurately being measured and reported, then it is impossible to accurately make predictions and/or value judgments about specific strategies or programs.

another metric i found interesting was transportation taxes vis-a-vis transportation costs ... i.e. if the stated purpose of fuel taxes is to maintain the transportation infrastructure, should all fuel taxes then be shifted to heavy trucking
https://www.garlic.com/~lynn/2002j.html#41 Transportation
https://www.garlic.com/~lynn/2002j.html#42 Transportation
https://www.garlic.com/~lynn/2004c.html#20 Parallel programming again (Re: Intel announces "CT" aka

or should purely recreational driving be used to subsidize transportation infrastructure costs incurred primarily in support of heavy trucking?

a semi-related theme i've frequently repeated is that the overhead of resource management should be proportional to the benefits of having that management (originally applied to computer system resource management). If heavy-weight management infrastructures aren't able to do any better than random decisions (and possibly much worse), then random decisions can be much more cost-effective. I actually used something like that to dynamically change a page replacement algorithm from one that approximated least-recently-used to pseudo-random, in situations where LRU replacement appeared to be counterproductive; a toy sketch of the idea follows the link below.
https://www.garlic.com/~lynn/subtopic.html#wsclock
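Purely as illustration (not the actual implementation referenced above), a minimal sketch of the idea: a clock-style approximation of LRU that falls back to pseudo-random victim selection whenever the reference bits stop carrying useful recency information (e.g. nearly every frame has been touched since the last sweep):

import random

class AdaptiveReplacer:
    """Toy page-replacement sketch: clock (approximate LRU) that falls back to
    pseudo-random victim selection when nearly every frame looks recently used."""

    def __init__(self, nframes, threshold=0.95):
        self.referenced = [False] * nframes   # simulated hardware reference bits
        self.hand = 0                         # clock hand position
        self.threshold = threshold            # fraction of set bits that triggers fallback
        self.nframes = nframes

    def touch(self, frame):
        self.referenced[frame] = True         # page in this frame was used

    def select_victim(self):
        # if almost everything looks "recently used", LRU ordering carries no
        # information; pick a pseudo-random victim instead of a long, pointless scan
        if sum(self.referenced) >= self.threshold * self.nframes:
            victim = random.randrange(self.nframes)
            self.referenced[victim] = False
            return victim
        # otherwise the usual clock sweep: clear-and-skip referenced frames
        while True:
            if self.referenced[self.hand]:
                self.referenced[self.hand] = False
                self.hand = (self.hand + 1) % self.nframes
            else:
                victim = self.hand
                self.hand = (self.hand + 1) % self.nframes
                return victim

The design point is simply that when a scan would find almost everything "recently used", the scan costs more than the information it yields, so a cheap random choice does at least as well.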


previous, next, index - home