List of Archived Posts

2005 Newsgroup Postings (01/19 - 01/31)

8086 memory space
Foreign key in Oracle Sql
Relocating application architecture and compiler support
[OT?] FBI Virtual Case File is even possible?
Relocating application architecture and compiler support
Relocating application architecture and compiler support
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Relocating application architecture and compiler support
Relocating application architecture and compiler support
Factoring problem, solved
Relocating application architecture and compiler support
[Lit.] Buffer overruns
Relocating application architecture and compiler support
something like a CTC on a PC
something like a CTC on a PC
[Lit.] Buffer overruns
[Lit.] Buffer overruns
CAS and LL/SC
CAS and LL/SC
[Lit.] Buffer overruns
[Lit.] Buffer overruns
The Mac is like a modern day Betamax
360 DIAGNOSE
stranger than fiction
360POO
CAS and LL/SC
Relocating application architecture and compiler support
Relocating application architecture and compiler support
M.I.T. SNA study team
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Noobie SSL certificate problem
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Relocating application architecture and compiler support
[Lit.] Buffer overruns
Relocating application architecture and compiler support
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
The mid-seventies SHARE survey
[Lit.] Buffer overruns
[Lit.] Buffer overruns
The mid-seventies SHARE survey
[Lit.] Buffer overruns
The mid-seventies SHARE survey
[Lit.] Buffer overruns
History of performance counters
[Lit.] Buffer overruns
The mid-seventies SHARE survey
The mid-seventies SHARE survey
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Keeping score
History of performance counters
The mid-seventies SHARE survey
[Lit.] Buffer overruns
[Lit.] Buffer overruns
The mid-seventies SHARE survey
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns

8086 memory space

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 18:54:37 -0700
Brian Inglis writes:
That was the automated startup virtual machine that started up the service machines. I was trying and failing to remember the VM standard SECUSER machine that received and responded to or forwarded system or service machine messages: was it called OPERATOR?

the standard operator login has been operator ... dating back to cp/67 ... so that was already in use ... but could use SECUSER to get copies of all the standard system messages normally going to OPERATOR.

a small secuser posting from vmshare archive from 1/12/83:
http://vm.marist.edu/~vmshare/browse.cgi?fn=SECUSER&ft=NOTE

misc other secuser postings
http://vm.marist.edu/~vmshare/browse.cgi?fn=SEND&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=SPY&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=TMON&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=XASETSEC&ft=NOTE
http://vm.marist.edu/~vmshare/browse.cgi?fn=PIPE95&ft=NOTE

PROP (PRogrammable OPerator) showed up in the early 80s ... misc. PROP discussion starting in 83:
http://vm.marist.edu/~vmshare/browse.cgi?fn=PROP&ft=MEMO
http://vm.marist.edu/~vmshare/browse.cgi?fn=NO:OPER&ft=MEMO

some discussion from vm/esa mailing list:
http://listserv.uark.edu/scripts/wa.exe?A2=ind9908&L=vmesa-l&F=&S=&P=8551

current programmable operator description from the administrative and planning guide:
http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/hcsg2a00/5.0?DT=20020402142448

standard/default virtual machine for PROP is the OPERATOR virtual machine.

and for something (almost) completely different:
http://www-1.ibm.com/servers/eserver/xseries/systems_management/xseries_sm/vmm.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Foreign key in Oracle Sql

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Foreign key in Oracle Sql
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Wed, 19 Jan 2005 19:50:54 -0700
DA Morgan writes:
I would love to have access to your resources for class preparation work. And you may well be correct that system/r code is the basic kernel in DB2 ... but the fact that one came before the other is not a guarantee that DB2 was built on a system/r foundation or just on its design. In one sense, given where Larry used to work, one could say that Oracle too was built on system/r. Would you agree?

i was involved in the tech transfer of system/r to endicott for sql/ds. i wasn't involved in the transfer of the code back to stl for db2 (sjr/bldg28 where system/r was done is about 10 miles north of stl/bldg90 where db2 was done ... both in south silicon valley while endicott is on the opposite coast). several years ago, the primary catcher in endicott had his 30th corporate anniversary ... and i presented him with an email log from the period.

however, in discussions with one of the people in the following meeting
https://www.garlic.com/~lynn/95.html#13

who had been working at STL at the time, he claims to have almost single-handedly managed the transfer of the sql/ds code from endicott back to stl (for db2).

however, there was lots of work done on that code after it reached stl ... both by people in stl and by multiple people at sjr/bldg28 who had worked on system/r (as well as code that was adapted from other projects).

lots of past system/r references/posts
https://www.garlic.com/~lynn/submain.html#systemr

of course there is the sql reunion web site:
http://www.mcjones.org/System_R/
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95.html

a couple specific items about DB2 from the above index:
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-SQL_DS.html#Index287
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-SQL_DS.html#Index290
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html#Index339
some discussion of other code that went into DB2
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Spreadin.html#Index271

there was lots of stuff going on in that period; epstein had graduated from berkeley (out of the ingres project) and was CTO at BLI ... which had a lot of machines in the market. he left BLI to go first to teradata and then form sybase. when he left bli, there were some number of BLI people lurking(?) around bldg.28 to backfill epstein's position. One of the lurkers(?) had been a disk engineer at the plant site in the 60s and had been snarfed up in the Shugart raids ... they may have even been having meetings at some of the same locations that Shugart had used (anyway i had some number of after-work meetings about whether to leave or not to leave).

minor epstein reference .... the two BLI namesakes had most recently come out of memorex:
https://en.wikipedia.org/wiki/Ingres
some discussion from the sql reunion site:
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Teradata.html
one of my shugart references:
https://www.garlic.com/~lynn/2002.html#17 index searching

there was a from-scratch open system rdbms done for os2 & aix (i believe code-named shelby, circa 1989, primarily at the time focused on os2) that was also announced as db2. some of the shelby stuff was also tied up w/transarc (aka cmu camelot).

now, from the dlm work that we had done (somewhat implied in this description)
https://www.garlic.com/~lynn/95.html#13

there were a couple people working on (mainframe) db2 that commented that if we actually did that w/oracle ... it would put things five years ahead of where they were.

and totally unrelated to most anything
http://www.sleepycat.com/company/management.shtml
and one of the people mentioned in the above reference also did consulting work on ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp
and for even stranger reference:
http://dune.mcs.kent.edu/~farrell/sa96/notes/sendmail/usenix.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Thu, 20 Jan 2005 09:47:44 -0700
CBFalconer writes:
This technique has been around for a long time, and is the heart of the old CP/M PRL (page relocatable) format. I had it highly automated, as exemplified by the program CCINSTAL and batch job MAKEDDT.JOB contained in ddtz27.zip. My CCINSTAL code is dated 1986, and was not new then. You can examine it all at:

http://cbfalconer.home.att.net/download/cpm/ddtz27.zip


i had done stuff in the early 70s for relocatable shared segments ... i.e. the same code running concurrently in the same shared segment that resided at different virtual addresses in different virtual address spaces ... random past posts on the subject
https://www.garlic.com/~lynn/submain.html#adcon

i was trying to deal with the problem from a structure that was basically os/360 conventions with relocatable adcons .... a feature of executable binaries when stored on disk ... which would be updated with fixed addresses when the binaries were brought into memory for execution. the problem was that the embedded fixed addresses would be unique to the specific location of the executable binary. basically i had to make addresses relative to a value that could be calculated on the fly.

not quite ten years earlier, tss/360 had dealt with the issue by separating the code and the addresses (as data) ... so that the addresses could be located in a private memory area separate from the instructions.

there were other systems from the 60s that also addressed the same issue.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[OT?] FBI Virtual Case File is even possible?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT?] FBI Virtual Case File is even possible?
Newsgroups: comp.arch
Date: Thu, 20 Jan 2005 12:35:01 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
CIA Virtual Case File is even possible? NSA Virtual Case File is even possible? G[S]IA Virtual Case File is even possible? NRO Virtual Case File is even possible?

marginally related posting from comp.databases.theory
https://www.garlic.com/~lynn/2005.html#55

with historical reference to one of those agencies ... this has an embedded prior post that also makes an obtuse reference to one such agency.
https://www.garlic.com/~lynn/2005b.html#1

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Thu, 20 Jan 2005 12:50:04 -0700
CBFalconer writes:
I thought the 360 architecture handled relocation very simply from the beginning, with base registers.

standard instructions used a base register plus a 12-bit displacement. however os/360 compilers and assemblers generated intermediate executables that had instructions and data intermixed ... including address constants that conventional programs would load into registers for use in "relocation".

there was administrative bookkeeping kept at the end of each such file that listed the location (within the program) of each such address constant and some (possibly symbolic) information on how to initialize the address constant when the program was loaded into memory. This administrative information was commonly referred to as relocatable adcons (ADdress CONstants) .... but they actually became fixed address constants at the time the program was loaded into memory for actual execution (aka the executable image out on disk and the executable image resident in memory tended to differ, at least by the swizzling that went on for such address constants).

so 360 hardware architecture went to all the trouble of making instruction execution relocatable ... and then, effectively, the os/360 software guys took some shortcuts and established conventions binding the actual executable images in memory to fixed address locations.

tss/360 was a parallel operating system effort that was targeted at time-sharing (aka TSS ... Time Sharing System) designed for the 360/67 ... which was the only 360 with virtual memory capability. tss/360 operating system conventions went a lot further towards trying to make sure that the executable image on disk was exactly the same executable image running in memory ... preserving the relocatable 360 hardware instruction paradigm.

In os/360 operating system convention, the "relocatable address constants" that were sprinkled thru-out the binary image on disk had to be swizzled as part of bringing the image into memory for execution (which at the same time bound the in-memory image to a specific address).
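a minimal sketch in C of what that load-time swizzle amounts to (hypothetical record layout ... the actual os/360 RLD entries carried more information, flags, lengths, etc):

#include <stddef.h>
#include <stdint.h>

/* one bookkeeping record at the end of the executable file: where a
   "relocatable adcon" sits within the program image */
struct rld_entry { uint32_t offset; };

/* after reading the image into memory, walk the table and turn each
   relative adcon into a fixed absolute address ... from this point the
   in-memory image is bound to load_addr */
void swizzle(unsigned char *image, uint32_t load_addr,
             const struct rld_entry *rld, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t *adcon = (uint32_t *)(image + rld[i].offset);
        *adcon += load_addr;
    }
}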

TSS/360 had a one-level store, page-mapped orientation ... much more orientated towards what was on disk and what was in memory would be identical (and address independent).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Thu, 20 Jan 2005 18:10:23 -0700
glen herrmannsfeldt writes:
To do relocation using base registers would require that the OS know which registers contained addresses. There is no hardware support for that.

however, there was one hardware mechanism for getting the address of the current instruction into a register for addressability ... w/o its having been loaded from some sort of address constant ... the standard convention of

         BALR  R12,0
         USING *,R12

an assembler convention that could also be used by compilers. Normally, branch-and-link was used for calls to subroutines ... branching to the supplied address and at the same time saving the address of where the call came from. The above sequence is a special case of branch-and-link where, with a zero second operand, it doesn't actually take any branch but does establish the current address in a register (w/o requiring any address constant).

a fairly typical os/360 calling convention is something like:


         L     R15,=A(subroutine)
         BALR  R14,R15

or

         L     R15,=V(subroutine)
         BALR  R14,R15

where =A(subroutine) and =V(subroutine) become address constants that are internal to the program. In standard os/360, address constants are embedded inside the program and administrative detail is added at the end of the program which the program loader uses to swizzle all address constants to their absolute value when the program is loaded.

While the standard hardware didn't provide relocation of base registers .... it did provide the BALR convention of establishing an address on the fly w/o requiring a (fixed) address constant, and it didn't preclude software conventions that avoided the use of fixed address constants embedded in the executable image of the code.

TSS/360 created a different type of convention where, on entry to a program, certain registers had preloaded values ... one of the registers contained a pointer to a block of storage outside the executable code image ... where various things like address constants were located. The other convention was something like this


         L     R15,=A(subroutine-executablebase)
         AR    R15,R12
         BALR  R14,R15

Where R12 contains "executablebase" ... for situations involving displacements larger than 12 bits. In os/360 terms, the value =A(subroutine-executablebase) is an absolute address constant (not a relocatable address constant requiring a swizzle at program load) .... and would be the same/constant value regardless of the address at which the program image was loaded. I used the above convention a lot in the early 70s for relocatable shared segments.
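in C terms, the tss/360-style discipline amounts to something like the following (hypothetical names and offset ... a sketch of the addressing convention, not actual tss code):

#include <stdint.h>

typedef void (*func_t)(void);

extern unsigned char image_base[];  /* stands in for R12's preloaded value */

#define SUBROUTINE_OFF 0x1400       /* =A(subroutine-executablebase) ...
                                       the same constant wherever the
                                       image happens to be loaded */

void call_subroutine(void)
{
    func_t f = (func_t)(image_base + SUBROUTINE_OFF);  /* AR   R15,R12 */
    f();                                               /* BALR R14,R15 */
}

nothing in the image needs to change when it is mapped at a different address ... only the base value differs per address space.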

One argument for the =V(subroutine) address constant was that there was effectively "early binding" of the address (performed at program load time) ... and it saved a register (no extra register dedicated to the convention) as well as saving an instruction in each calling sequence.

Now there were 360 machines that did have some additional relocation provided (not the virtual memory kind found in the 360/67). A system engineer on the boeing account adapted a version of cp/67 (normally using paged virtual memory) to run on a 360/50(?). My understanding is the 360/50 had something that I think was referred to as a DIL instruction (not part of standard 360 instruction operation), used in conjunction with some emulation packages (7090?), that allowed specification of base&bound .... basically a base value that the hardware added to all address calculations and then checked against a bound value. This allowed simple contiguous address relocation (and required swapping of the whole address space ... rather than the more granular paging available on the 360/67).

This would be more akin to the current mainframe implementation of hardware-supported virtual machines called LPARs (logical partitions), where physical memory can be partitioned into logically contiguous areas. No swapping or paging is supported by the hardware microcode for LPARs ... just the (relatively) static partitioning of available physical memory. LPARs started out as sort of a significant subset of the mainframe virtual machine operating system support dropped into the hardware of the mainframe.
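a minimal C sketch of the base&bound translation described above (made-up partition numbers; the 360/50 DIL details are folklore anyway):

#include <stdint.h>
#include <stdio.h>

struct bb { uint32_t base, bound; };   /* one contiguous partition */

/* hardware adds the base to every address calculation and checks the
   program address against the bound */
static int translate(const struct bb *r, uint32_t vaddr, uint32_t *raddr)
{
    if (vaddr >= r->bound)
        return -1;                     /* outside the partition: fault */
    *raddr = r->base + vaddr;
    return 0;
}

int main(void)
{
    struct bb region = { 0x80000, 0x40000 };   /* 256k partition at 512k */
    uint32_t ra;
    if (translate(&region, 0x1234, &ra) == 0)
        printf("program address 0x1234 -> real 0x%x\n", (unsigned)ra);
    return 0;
}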

mainframe logical partition getting eal5 certification:
http://www-1.ibm.com/servers/eserver/zseries/security/eal5_ac.html

LPAR support even has extended to non-mainframes:
http://www-1.ibm.com/servers/eserver/iseries/lpar/
http://www-1.ibm.com/servers/eserver/pseries/lpar/
http://www-1.ibm.com/servers/eserver/pseries/lpar/faq_2.html

lots of past posts mentioning LPARs:
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002p.html#55 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003c.html#41 How much overhead is "running another MVS LPAR" ?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#47 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004k.html#37 Wars against bad things
https://www.garlic.com/~lynn/2004k.html#43 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#41 EAL5
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#13 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#32 What system Release do you use... OS390? z/os? I'm a Vendor S
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,comp.security.unix
Date: Thu, 20 Jan 2005 20:36:30 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
You don't need hardware support to build capability-based systems. There are modern systems that use language-based mechanisms instead of hardware support. My favorite example is E.
http://www.erights.org/


tymshare had done gnosis on mainframe 370. when m/d bought tymshare, there was a spin-off company called keylogic and gnosis became keykos.

in the tymshare version, i estimated possibly 30 percent of total pathlength was devoted to accounting ... aka tymshare was a time-sharing service bureau and they were looking at a platform to deliver 3rd party applications and provide (relatively) accurate chargebacks to the originators (in addition to partitioning and isolation)

in the keykos incarnation ... the fine-grain accounting was taken out and at one point they claimed the ability to do higher transaction efficiencies than TPF (a specialized transaction operating system that grew up from the airline control program ... also now used for major financial transaction systems) on the same exact hardware.

since then there has been a microprocessor variety done, called eros, that is claimed to be designed for EAL7 evaluation.

some current keykos references:
http://cap-lore.com/CapTheory/upenn/
http://www.agorics.com/Library/keykosindex.html

small preface from above:
This documentation describes the design and implementation of the KeyKOS system. Initially developed at Tymshare, Inc., it was deployed in production in 1981 as a secure server processing credit card transactions while simultaneously supporting other applications. Agorics has licensed this system and the associated patents, and utilizes the fundamental ideas and technology in our ebusiness solutions.

some current eros references:
http://www.cis.upenn.edu/~eros/
http://www.eros-os.org/

small disclaimer ... i was never directly associated with tymshare and/or gnosis ... however i was brought in by m/d to do a gnosis audit as a prelude to the spinoff.

random past posts about (high assurance) time-sharing from the 60s, 70s, and into the 80s.
https://www.garlic.com/~lynn/submain.html#timeshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,comp.security.unix
Date: Thu, 20 Jan 2005 20:56:11 -0700
and, erights references keykos:
http://www.erights.org/history/keykos.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Fri, 21 Jan 2005 13:25:05 -0700
glen herrmannsfeldt writes:
This reminds me of the way Macintosh OS used to work. (Does it still do that?) Fortunately I never wrote any low level Mac code, but you had to be careful when things could be relocated, and not store addresses across system calls that could move things around.

In the code above, you can't move things around between the AR and the BALR. The original mac didn't do preemptive multitasking, so the user program always knew when things could move and when they couldn't.

On the other hand, for a system like 680x0 with separate address and data registers, maybe it isn't so hard.


it turns out that this is a slightly different operation ... i (re)wrote the dispatcher, scheduler, page replacement, etc. for cp/67 ... a lot of it as an undergraduate ... which was shipped in the standard product. a lot of this was dropped in the morphing of cp/67 into vm/370 ... but I got to put it (and a lot of other stuff) back in when i did the resource manager for vm/370
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

because cp/67 (and vm/370) had virtual memory support ... each user had their own virtual address space ... where pages could be brought in and out of real storage at arbitrary real addresses.

the problem i had in the early 70s with location independent code ... was that i wanted to be able to page-map applications to files on disks ... and that multiple different virtual address spaces could share the same exact (r/o) page-mapped information.

the problem with the standard os/360 model was that when executable files out on disk were mapped into memory (virtual or real), the program loader had to run thru arbitrary locations in the executable image ... swizzling arbitrary address constants into different values (randomly spread thruout the executable image).

as a result, it wasn't just a simple matter of page mapping an executable image file (out on disk) into a virtual address space ... there was still all this address constant swizzling that had to be done before the program could actually start execution. Furthermore, the default was to touch every possible page that potentially contained an address constant (that conceivably might be used ... whether it actually was used or not) and do the appropriate swizzling operation. And further complicating the process, the swizzling operation went on within the specific virtual address space that had just mapped the executable image.

So the swizzling operation that would go on in each virtual address space ... pre-touched and changed an arbitrary number of virtual pages (whether the actual application execution would touch those specific pages or not) ... marking those pages changed ... defeating the ability to have the program image mapped into r/o shared segments that were possibly common across a large number of different address spaces. The issue that i was trying to address wasn't what might be in the registers of each individual process context in each virtual address space .... it was trying to make the physical executable storage image of the application r/o, shared concurrently across a large number of different address spaces. Any modification required to the contents of that executable image defeated the ability to have it r/o shared concurrently across a large number of different address spaces (as well as possibly prefetching and changing pages that might never actually be used).

So that was the first problem.
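the effect is easy to see with a modern page-mapping analogy (unix mmap here, purely illustrative ... not anything from cp/67 or vm/370):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* map an executable image r/o and every process shares the same
   physical pages; the moment a loader has to swizzle adcons inside the
   image, each touched page becomes a private (copy-on-write) copy and
   the sharing is lost */
unsigned char *map_image(const char *path, size_t len, int need_swizzle)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 0;
    unsigned char *p = mmap(0, len,
                            need_swizzle ? PROT_READ | PROT_WRITE : PROT_READ,
                            MAP_PRIVATE, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return 0;
    if (need_swizzle)
        p[0x100] += 1;    /* any such store unshares that page */
    return p;
}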

A subset of the relocating shared segment implementation (that I had done in the early 70s) was picked up by the product group and released under the feature name DCSS (DisContiguous Shared Segments). I had done a page mapped file system along with the extended shared segment support ... so that it was relatively straightforward to page map objects in the file system into virtual address spaces. The page mapped file system wasn't part of the subset of stuff that was picked up for DCSS. random posts on the page mapped file system work
https://www.garlic.com/~lynn/submain.html#mmap

They addressed the address constant swizzling problem by defining globally unique system addresses for every application that would reside in predefined virtual memory segments; loading the application at that predefined virtual memory location and saving the virtual address space image to a reserved portion of the system-wide paging system (in part because they had failed to pick up the page mapped filesystem enhancements).

So DCSS had a new kind of system command that would map a portion of a virtual address space to a presaved application image (and specify things like shared segments, etc).

So there were several limitations ... one, it required system-wide coordination of the presaved applications as well as system privileges to set up stuff (i.e. an individual department couldn't enable their own private applications).

A large installation would have more applications defined in this infrastructure than could fit in a single 16mbyte virtual address space ... and as a result, there had to be careful management of applications that were assigned conflicting predefined, preswizzled virtual addresses. While no single user was likely to try and map all possible applications into a single address space at the same moment ... it was possible that a single user might need to map an arbitrary combination of applications (in total less than 16mbytes), some of which may have had conflicting, pre-assigned and swizzled virtual address images. As systems got bigger, with a wider variety of users and applications, the problem of pre-swizzled virtual application images with conflicting virtual address locations increased.

So the next solution was to have multiple pre-swizzled application images defined at multiple different virtual address locations. Users would decide on the combination of page image applications and libraries that needed to be concurrently loaded ... and try and find some possible combination of the multiple different images of each application that could comfortably co-exist in the same address space.

The original implementation that i had done in the early 70s ... had allowed page mapping arbitrary files as virtual address space images at arbitrary virtual address locations .... w/o needing to swizzle address constants embedded in those page mapped images ... in part because I had done a lot of work on allowing address location independent executable images
https://www.garlic.com/~lynn/submain.html#adcon
as well as page mapped filesystem enhancements
https://www.garlic.com/~lynn/submain.html#mmap

and furthermore that an executable image could occupy a read-only shared segment ... where the same exact pages were mapped concurrently into multiple different virtual address spaces ... at possibly different virtual addresses.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Fri, 21 Jan 2005 14:49:09 -0700
so tss/360 had a page mapped filesystem as standard and an objective that a portion of virtual address space could be mapped to the executable image out in the filesystem ... and that the mapping process didn't have to track thru arbitrary virtual pages of that executable image ... swizzling address constants. The system could do the mapping and start program execution (almost) immediately ... possibly w/o having to prefetch any of the executable image.

furthermore, if other address spaces were already sharing that executable image on a r/o basis ... the mapping process would just set up the segments so the different address spaces utilized the same virtual pages.

so tss/360 had a system-wide convention ... assembler, applications, compilers, etc ... that positioned any address constants that needed swizzling separately from the executable image. This separate table of address constants needing swizzling would be prefetched (as part of mapping the executable file to the virtual address space) and the necessary swizzling of address constants could be done in a private area for each specific virtual address space. There were then system-wide conventions on how executable code accessed the table of (swizzled) address constants .... the executable code image could be exactly the same for all virtual address spaces ... and in fact, the same exact executable image could be the same exact virtual pages (using shared segments) .... but the swizzled address constants would be located in an area (by tss/360 system-wide convention) separate from the executable program image.
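in modern terms, that separate table is essentially what a GOT (global offset table) does; a tiny C sketch of the discipline (hypothetical layout):

#include <stddef.h>

/* private, per-address-space table ... swizzled at map time */
struct adcon_table {
    void (*subroutine)(void);
    void *data_area;
};

/* shared r/o code: identical pages in every address space; it reaches
   all location-specific values indirectly through the table pointer
   (the "preloaded register" of the tss convention) */
void shared_code(struct adcon_table *t)
{
    t->subroutine();
}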

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Factoring problem, solved

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Factoring problem, solved
Newsgroups: sci.crypt,sci.math
Date: Fri, 21 Jan 2005 15:48:19 -0700
Christian Bau writes:
It took me a while to realize, but what a lottery does is selling hope at a relatively cheap price. You don't pay for the chance of winning, you pay for the hope.

a lot of the gambling operations claim payouts of 95% or better.

state lottery may have a payout of 80% after their (direct) take (education or whatever, gov. bureaucracy administrative overhead, etc)

so lets say a community starts with $1m and it is all played on lottery each week.

first week, the state keeps $200k and pays out $800k. however a good portion of that $800k payout is claimed to be subject to fed. & state income tax ... say another $300k the gov. also gets because of tax law. so at the end of the first week the community may actually only retain $500k.

second week, the community plays the remaining $500k, the state keeps $100k, and pays out $400k. Much of that $400k is also subject to income taxes, state & feds get another $150k ... leaving $250k to the community.

third week, the community plays the remaining $250k, the state keeps $50k, and pays out $200k. Much of that $200k is also subject to income taxes, state & fed get another $75k ... leaving $125k.

the gimmick is that not only does the gov. take the 20% off the top ... but because of the way the tax laws are written, the pay-offs are taxed. You do get to subtract your purchases of lottery tickets from winnings ... but it is in the gov's interest to have (few) big payoffs where the winnings are also subject to big taxes. With really big payoffs, the weekly churn can mean the gov. takes closer to 50% (each week).

w/o fresh infusion of funds every week, at closer to 50% gov. retention on gov. lotteries ... the gov(s). can quickly accumulate most of the money in play.
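a tiny loop over the made-up numbers above (the 0.625 is just the $500k-of-$800k retention implied by the example):

#include <stdio.h>

int main(void)
{
    double pot = 1000000.0;                 /* community starts with $1m */
    for (int week = 1; week <= 3; week++) {
        double payout = pot * 0.80;         /* state keeps 20% off the top */
        pot = payout * 0.625;               /* taxes claim ~37.5% of payout */
        printf("week %d: community retains $%.0fk\n", week, pot / 1000.0);
    }
    return 0;
}

prints 500k, 250k, 125k ... the halving each week from the example.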

i've seen slot machines claim 98% or better payout. their gimmick is that they can also eventually accumulate all the money if all the players keep repeatedly playing the same money (at a couple percent retention on each round ... you just need a lot larger number of rounds ... compared to a weekly lottery at closer to 50% retention, counting the amount they directly keep plus the amount they may siphon off each round because of tax laws).

slots can also be purely about the entertainment of playing.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Fri, 21 Jan 2005 16:29:42 -0700
glen herrmannsfeldt writes:
But there is one thing that OS/360 does easily that is hard in other systems, which is what the LINK macro does. To take another program, load it into the same address space, execute it (with the original still addressable) and then return.

I remember TOPS-10, where each program was loaded with virtual origin of 0. There was a system for sending files to the print queue which involved writing the entire address space to disk, running the QUEUE program with an option to execute a program when it was done, which would then reload and continue on.

Any system with a fixed load point can't have two programs in the same address space at the same time.

In the OS/360 case, besides doing LINK, many routines, especially access methods, were directly addressable somewhere in system space. Now, there are systems which reserve parts of the address space for system and parts for user, but that doesn't help LINK.


basically you could do a link-like function in both os/360 as well as tss/360. the primary difference was that os/360 had to read the executable file image into memory and then run the swizzle against the address constants that were randomly sprinkled thruout the execution image.

in the tss/360 scenario ... you just had to memory map some portion of the virtual address space to the executable file image on disk (and let the paging operation fetch the pages as needed) ... and all the address constants needing swizzling were kept in a different structure.

os/360 had a single real address space orientation so that it was possible for multiple different processes to share the same executable image because they all shared the same (single, real) address space.

tss/360 had multiple virtual address spaces .... and for multiple different processes to share the same exact executable copy ... it relied on the shared segment infrastructure.

in the os/360 scenario, all program loading was to a unique real address ... since all processes shared the same, single real address space (there was no chance for program address conflict ... since each program could be assigned a unique real address as it was read into memory).

in the tss/360 scenario, different processes might load & populate their own virtual address space in different sequences ... potentially creating address assignment conflicts if there was a requirement that each application have an identically assigned virtual address across all virtual address spaces.

in the os/360 scenario ... if you had two different processes, the first LINKed app1, then app2, then app3, and finally app4 and the second LINKed app2, then app4, then app3, and finally app1 ... it all fell out in the wash since there was a single global real address space.

a difference between the os/360 and tss/360 operation is that os/360 allowed all the address constants (needing position/location swizzling) to be randomly sprinkled thruout the executable image, while tss/360 collected the address constants (needing swizzling) into a different structure.

both could do dynamic process location specific binding at program load time. however, in the tss/360 scenario ... different processes could share the same exact executable image at different address locations ... because the executable image was a separate structure from the structure of address constants (that required location specific swizzling).
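for comparison, the closest modern analog of the LINK scenario is probably unix dlopen (purely illustrative ... hypothetical module and entry point names):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* load another program image into the same address space ... the
       caller's own image remains mapped and addressable throughout */
    void *h = dlopen("./app2.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    void (*entry)(void) = (void (*)(void))dlsym(h, "entry");
    if (entry)
        entry();      /* "call" the linked-to program */
    dlclose(h);       /* and "return" from the LINK */
    return 0;
}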

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,comp.security.unix
Date: Fri, 21 Jan 2005 17:58:08 -0700
"Douglas A. Gwyn" writes:
I didn't say you did. I said it was a shame that there isn't more support for capas in hardware. To build a really hackerproof system one needs to lock it down tight all the way down to the lowest accessible level.

a lot of cisc is business as usual.

future system (FS) was going to be a super-duper advanced architecture with everybody's favorite feature built in ... including all sorts of integrity and capability stuff.

i didn't make myself popular with the FS crowd since I was claiming that we had some stuff as good or better currently running on more traditional hardware ... than some of the stuff being designed for FS.

FS was eventually killed ... one of the nails in the coffin was the calculation that if an FS machine was built from the very fastest 370/195 technology then available ... all the hardware layers would have resulted in some number of applications running at about the same thruput as an existing 370/145 (possibly a 30:1 slowdown) ... lots of past postings on FS
https://www.garlic.com/~lynn/submain.html#futuresys

the other problem that I gave FS was over the security of their documentation. There had been an incident where highly sensitive corporate paper documents had been copied and leaked to the press. This led to an investigation and all corporate copying machines being retrofitted with transparent serial numbers on the glass that would show up on every copied page (similar but different to current stuff about every laser printer having a unique fingerprint). Anyway, the FS documents were considered even more sensitive ... so there was going to be as little paper as possible ... mostly softcopy restricted to very secure systems. They made the mistake of saying that even if I was in the room with the computer, i couldn't gain access to the documents. It actually took less than five minutes.

part of the observation was that the technology base we were using was being deployed in commercial time-sharing service bureaus with a fairly high level of integrity and assurance (a lot of customers in the financial industry ... with concurrent customers that might even be fierce competitors) ... including being the work-horse system for tymshare and numerous others
https://www.garlic.com/~lynn/submain.html#timeshare

and few of these systems experienced any of the exploits and vulnerabilities commonly seen today. I've periodically joked that I was asked in the 60s to write code to address security issues that aren't even currently being considered in today's environment (because there are so many significantly more serious security, exploit, and vulnerability problems about).

an example was an online time-sharing system we had in cambridge in the early 70s ... there was online access by MIT, Harvard, BU and other students in the boston area ... as well as online access by people from corporate hdqtrs doing data processing on corporate information at the highest sensitivity level. There were no outside breaches ... there were a couple denial of service events that were quickly corrected.

i've periodically commented that major early RISC activity (801) in the 70s could be considered a reaction to the failure and demise of FS .... going to almost the opposite extreme in terms of hardware complexity. lots of past 801, romp, rios, risc, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801

and that anything that was being planned for FS could be done better in sophisticated software on simplified hardware.

the folklore is that some number of the FS crowd did migrate to Rochester, where they recreated FS on a much smaller scale with very customized hardware called the system/38. The system/38 evolved into the cisc as/400 ... and eventually the as/400 migrated to risc (power/pc). So it has sort of come full-circle ... I claim that at least some of the genesis for RISC (801, ROMP, RIOS, power, power/pc, etc.) came from the failure of FS.

Somewhat, the original 801 thesis was that you could raise the lowest accessible level for application software and provide the necessary lockdown as part of a sophisticated software layer ... w/o needing the expense of specialized hardware (somewhat epitomized by FS). In some sense that is the current RISC-based as/400.

The 801 programming language was PL.8 (a pl1 subset) with CPr as the operating system. The original 801 had no hardware protection domain. The PL.8 compiler would only generate correct application code and the loader/binder would validate that only valid generated PL.8 code was being loaded. Once such an application was running, it could directly access all software and hardware facilities (correctly) w/o the necessity for kernel calls to cross protection domains.

When it was decided to retarget an 801/RISC processor to the unix workstation market, a more traditional supervisor/application (kernel/non-kernel) protection mode had to be added to the hardware.

part of the point was that it is possible to build high integrity (as well as high-thruput) infrastructures w/o lots of hardware reliance ... if the overall structure is otherwise sound.

one such demonstration was/is the keykos system ... runs on standard 370 architecture ... with some number of the characteristics that were being claimed for FS.

Some topic drift ... much of the folklore has it that a major motivation for FS was the plug-compatible controllers .... FS would provide a much higher degree of complex infrastructure integration, making plug-compatible controllers an extremely difficult proposition.

as an undergraduate in the 60s, one of the projects that i got to work on was a university plug compatible controller that was written up as one of the geneses of the plug compatible controller business (which created a major motivation for FS). misc. past posts on the early plug compatible controller project
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Fri, 21 Jan 2005 19:39:56 -0700
Peter Flass writes:
So why was TSS considered so slow? (not that OS/360 was any ball of speed).

lots of reasons ... long path lengths ... over complex algorithms ... bloated code size.

lets say you had an os/360 (or cms) environment that did program load ... it could queue up a single i/o that read the application into memory in multiple 64k chunks ... all batched together.

tss/360 could have a large compiler laid out as a 1mbyte memory mapped file ... do a memory mapping for 256 4k pages ... and then possibly individually page fault each 4k page, one at a time. To some extent they got over-enamored with the concept of one-level store and paging and lost sight of the fact that the purity of the concept could significantly increase latency if all transfers had to be serialized 4k bytes at a time (as opposed to batching program loading into larger transfer units).
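rough arithmetic (hypothetical but period-plausible numbers, just to size the effect): if each serialized 4k page fault costs ~25ms of arm seek, rotational delay and transfer, then 256 faults for a 1mbyte compiler is 256 x 25ms, or about 6.4 seconds ... while a loader batching the same mbyte as sixteen chained 64k reads at ~30ms each moves it in under half a second.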

the kernel was really large ... tss/360 supposedly was going to have a target of a 512k 360/67 ... but quite quickly grew to a minimum of 768k storage because of the bloated fixed kernel. at one point there was a statement about a set of benchmarks done on a 1mbyte 360/67 uniprocessor and on a 2mbyte 360/67 two-processor machine ... with the 2mbyte/2processor configuration having 3.8 times the thruput of the single processor benchmark. the official comment was (while the uniprocessor system was really slow), the two processor benchmark having 3.8 times the thruput of the single processor benchmark demonstrated how advanced the tss/360 multiprocessor algorithms were (getting almost four times the thruput with only twice the hardware). the actual explanation was that the kernel was so bloated that it could hardly fit in the 1mbyte configuration .... and that in the 2mbyte configuration there was almost enuf memory (not in use by the kernel) for running applications.

a benchmark at the university on a 768k 360/67 running tss/360 ... i believe prerelease 0.68, with four emulated users doing mixed-mode fortran edit, compile and execute ... had multi-second response for trivial program edit line input. at the same time, on the same hardware, cp/67 running the same mixed-mode fortran edit, compile and execute had subsecond response for trivial edit line input .... but running 30 concurrent cms users ... compared to 4 tss users (although i had already done a lot of cp/67 performance enhancements).

there was folklore that when tss/360 was decommitted and the development group reduced from 1200 people to possibly 20 ... a single person now had responsibility for the tss/360 scheduler and a large number of other modules. supposedly the person discovered that on a pass thru the kernel ... every involved kernel module was repeatedly calling the scheduler ... when it turned out that it was only necessary to call the scheduler once per kernel call (rather than every module calling the scheduler, resulting in multiple scheduler calls per kernel call). fixing that is claimed to have eliminated a million(?) instructions per kernel call.

at some point in the tss/370 life ... it had been relatively stable for a long time ... with a lot of work over the years on performance tweaking ... they claimed that they got the total pathlength to handle a page fault (page fault, page replacement selection, schedule page read, task switch, page read complete, task switch, etc) down to maybe five times better than MVS ... but still five times longer than my pathlength for the equivalent sequence in vm/370.

it was in this era (late 70s) that they did the unix project for at&t ... where a kernel semantics interface was built on low-level tss/370 kernel functions to support a high-level unix environment. i believe this tss/370 environment for unix was called ssup.

one of the early tss/360 features that i gave them trouble about was the whole thing about interactive vis-a-vis batch. early 360/67 had a 2301 "drum" for a limited amount of high-speed paging space and the rest was 2311 (very slow) disks. the scenario went something like ... if someone finished a line in edit and hit return (on a 2741) ... the tss kernel would recognize it as interactive and pull the pages that the task had been using the previous time from 2311 into memory and then write them all out to 2301 ... once that was done it would start the task ... which would allow it to page fault the pages back into memory from the 2301. when the trivial interactive task went into wait (for more terminal input), the kernel would pull all of the task's pages off the 2301 back into memory and move them to 2311. This dragging of pages off 2311 into memory and out to 2301 and then dragging the pages off 2301 into memory and out to 2311 would occur regardless of whether there was any contention for either 2301 space or real memory space. It was a neat, fancy algorithm and they would do it every time ... whether it was necessary or not ... just because it was so neat(?).

... there is a lot more ... but that should be enuf for the moment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

something like a CTC on a PC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: something like a CTC on a PC
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2005 08:57:38 -0700
CBFalconer writes:
He's talking about the language used to implement the comm, and peripherally about the ugly things done to that language by failure to follow standards. If the implementors had followed the standard Pascal would be popular today. As it is they tied their versions to hardware and OSs that are, to all practical purposes, gone.

the standard pascals tended to be somewhat limited. i was once asked to port a 50k-60k line vs/pascal application to another platform's "standard" pascal. the vs/pascal extensions had to be remapped to relatively standard pascal ... which could be annoying but not difficult. the problem was that the target pascal appeared never to have been used for any serious application of any size ... and what is worse, the vendor had outsourced the support to an organization 12 timezones away. nearly every day there was another bug/problem ... while i could go talk to the local people ... it still took time to cycle thru the outsourced process.

vs/pascal originally had been done at los gatos lab (bldg.29) for mainframe by P & W. A backend was done for vs/pascal that also supported aix & risc processors. W went on to be vp of software development at MIPS and later was general manager of the business unit responsible for java. P went to metaware in santa cruz.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

something like a CTC on a PC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: something like a CTC on a PC
Newsgroups: alt.folklore.computers
Date: Sun, 23 Jan 2005 09:13:55 -0700
CBFalconer writes:
He's talking about the language used to implement the comm, and peripherally about the ugly things done to that language by failure to follow standards. If the implementors had followed the standard Pascal would be popular today. As it is they tied their versions to hardware and OSs that are, to all practical purposes, gone.

one might claim that C achieved critical mass and dominance over pascal ... because of the penetration of unix and unix applications ... which represented a fairly significant and available C code base.

turbo pascal achieved some market use ... but it didn't extend down thru the very core of the operating system (and I don't remember a large number of operations doing any major projects with turbo pascal ... that also shipped the code as part of any product).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 23 Jan 2005 10:30:40 -0700
"John E. Hadstate" writes:
There was a study, somewhat dated now to be sure, that claimed that an average programmer could produce 10 lines of bug-free code per day. Oddly, the number didn't seem to depend much on PL used. In other words, the study found that the programmer could produce 10 bug-free lines of assembler code, 10 lines of COBOL, 10 lines of C, etc. This supported an economic argument that since 10 lines of C was believed to represent a greater quantity of useful computation than 10 lines of assembler, C was to be preferred over assembler code as a more cost-effective Systems Programming Language.

somewhat depends on code characteristics ... for a lot of code there can be a 10:1 difference between assembler and higher level language ... however, a lot of low level kernel programming ... which tends towards a lot of conditional testing and conditional execution ... sometimes tends to be close to 1:1.

in the early 70s, i wrote a PLI program to analyze assembler programs ... it created an abstract representation of the machine instructions and the instruction sequence (& control flow).

one of the common failure modes in large assembler programs was register content management (at least in mainframe assembler with 16 registers ... and conventions of both short-term as well as some long-term register contents use). quite a few kernel failures from the time were essentially use-before-set problems ... some atypical control flow not establishing the register contents correctly before subsequent use.

higher level programming languages have tended to do a much better job of managing register content values ... and especially after the early 80s ... also doing a better job of recognizing things like use-before-set problems.
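
the use-before-set pattern is easy to render in C terms. a minimal, hypothetical sketch (not from the original PLI analyzer; names are invented) where the value is established on the common path but not on an atypical one:

    /* hypothetical sketch of the use-before-set failure class:
       "idx" plays the role of a register whose contents are
       established on the common control path but not on an
       atypical one */
    int find(int key, const int *table, int n)
    {
        int idx;                     /* never initialized here */
        for (int i = 0; i < n; i++) {
            if (table[i] == key) {
                idx = i;             /* established only when key is found */
                break;
            }
        }
        return idx;                  /* use-before-set when key is absent --
                                        the bug; initializing idx (say to -1)
                                        is the fix */
    }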

there were however some assembler routines that could almost be simpler than higher level PL. my early 70s programs attempted to take the program abstraction and output a higher level program representation. Some highly optimized, compact kernel routines doing various kinds of conditional testing and branch operations ... would translate into very convoluted, nested if/then/else type structures sometimes 10-15 (or even more) levels deep. It appeared that with enuf incentive, low-level assembler "GOTO" programming could get extremely inventive.

This made an impression on me ... since I had recently been to some presentations ... that included GOTO-less programming and "super programmer" (Harlan Mills).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 23 Jan 2005 14:54:38 -0700
to bring it back a little to buffer overruns (in C) ... there were enuf registers (16) that environments could define conventions for relatively long lived register contents (like address pointers) ... but not enuf registers to avoid frequent "overloading" of (long-lived) register contents (i.e. the same register used for multiple different purposes in the same application).

that condition as well as ambiguity about specific code paths possibly not establishing required register content correctly before use ... led to some number of failures (using a register as an address pointer when its current contents are from some arithmetic operation).

some number of register content failure modes were identical to dangling pointer failure modes ... so it could take some additional investigation to distinguish failures that never established the register contents from failures that established the register contents with a dangling pointer.

however, there are some similarities between mistakes with register content management (in assembler) and the typical mistakes with operations involving buffer lengths in typical C language environments. At least in the assembler case, there were some studies showing that the greater the "distance" between typical register content establishment and content use ... the greater the probability of human mistake. I know of no similar study showing that the "distance" between establishment of buffers in C and subsequent use increases the probability of buffer length mistakes (although I could conjecture it to be possible).

The "distance" was typically cognitive thinking process ... between the time that the programmer had to think about putting a value in a register ... and thinking about the use of that register contents. Such cognitive distances were further aggravated if you had 15 year code and somebody new was doing maintenance ... if there was a large cognitive distance ... new maitenance might even miss the requirement for register content establishment.

The funny thing about these assembler environments was that there were very few buffer length problems. At least the environments I dealt with had established conventions that buffer lengths were carried with the buffer ... so the programming convention was typically to pick up the length at the same point any use of the buffer was made (very small cognitive distance between establishing the length and referencing any use involving the length). C language conventions tend to require that the programmer is cognizant of each buffer length ... where the original length value establishment may be widely separated from where there are references involving the use of the length.
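
a minimal C sketch of that convention (hypothetical names, not any particular 60s implementation): the capacity travels with the buffer, so the length is picked up right at the point of use:

    #include <string.h>

    /* length carried with the buffer: capacity and current length
       are part of the structure rather than the programmer's memory */
    struct buf {
        size_t maxlen;    /* capacity, established with the buffer */
        size_t len;       /* length of current contents */
        char   data[128];
    };

    void buf_append(struct buf *b, const char *src, size_t n)
    {
        size_t room = b->maxlen - b->len;   /* length read where used */
        if (n > room)
            n = room;                       /* truncate, never overrun */
        memcpy(b->data + b->len, src, n);
        b->len += n;
    }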

for some other topic drift related to assembler analysis and abstract representation in a higher level language ... while there is a fairly direct correspondence for counter loops ... (at least) 370 assembler could have complex 3-4 value conditional operations ... while a lot of higher level language logic tends to be true/false conditional logic.

370 assembler provided a two-bit condition code, allowing for four possible values. lots of instructions resulted in setting one of at least three possible conditions. compares (arithmetic and logical) were less-than, equal, and greater-than. bit-twiddling operations tended to be all zeros, all ones, and mixed. arithmetic operations were greater-than-zero, zero, and less-than-zero.

Branch instructions carried a four-bit mask ... one bit corresponding to each possible condition code value (conditional branches could specify any combination of condition code values).

complex assembler code (especially optimized kernel code) could easily have a test condition followed (effectively) by three different code paths ... regularly encountered things like:

    test-condition
    branch-condition   equal/zero
    branch-condition   high/ones
    (fall-thru for low/mixed)

and sometimes there were even cases of full four-value logic code paths. somewhat the difference between a two-value, true/false logic paradigm ... and a three- or four-value logic paradigm.
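
C's two-value conditionals can only express such a three-way outcome as nested tests. a small sketch (hypothetical, using strcmp's less/equal/greater result as a stand-in for the condition code):

    #include <stdio.h>
    #include <string.h>

    /* a 370 compare sets one of three condition-code values
       (low/equal/high); in C the three-way outcome has to be
       unfolded into nested two-way tests */
    void classify(const char *a, const char *b)
    {
        int cc = strcmp(a, b);    /* <0, 0, >0 ~ low, equal, high */
        if (cc == 0)
            puts("equal");        /* branch-condition equal/zero */
        else if (cc > 0)
            puts("high");         /* branch-condition high/ones */
        else
            puts("low");          /* fall-thru for low/mixed */
    }

    int main(void)
    {
        classify("abc", "abd");   /* prints "low" */
        return 0;
    }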

and if you aren't tired of topic drift ... some past posts on three-value logic ...
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004f.html#2 Quote of the Week

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: comp.arch
Date: Sun, 23 Jan 2005 15:05:41 -0700
Bernd Paysan writes:
I started with 1MB, and worked quite ok for years, until I upgraded to 4MB (Atari ST). When I switched over to a Linux 486 PC, 16 MB seemed to be "plenty". Well, not that plenty, but the 64MB of the next (Pentium) PC could do all the stuff at ease. Except GIMP, but I was quite confident that the 256MB of the Athlon PC would be more than sufficient. Now I have a higher resolution scanner, and 1GB (with an Athlon64) seems to be enough.

All I can see is that Moore's law is slow enough that the increasing amount of data can follow. I enjoy having several undo levels on high resolution pictures.


i started with a 64k "personal computer" ... the university normally shut down the computing center over the weekend (from 8am sat. to 8am mon) ... but they gave me a key to the machine room and I could use the 360/30 for 48hrs straight w/o any interruption.

it is now almost 40 years later and i've got a 4gbyte (although linux claims that there is only 3.5gbytes) "personal computer". except for the possible linux glitch, i have effectively doubled the number of address bits ... 2**16 to 2**32 ... in just under 40 years ... although 4gbytes may do me until it has been a full 40 years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: comp.arch
Date: Mon, 24 Jan 2005 10:57:50 -0700
Andi Kleen writes:
It's a BIOS glitch, Linux is not to blame.

Directly below 4GB there is the PCI+AGP memory hole which is roughly 500MB. Your BIOS is not able to map the memory "around" the hole and the memory sitting below the hole is lost. Linux cannot use it because it can only use memory that the BIOS gives to it.


bios says

4gbyte memory
agp (defaulted) 128mb, only options 64mb, 128mb, & 256mb
primary graphics uses agp

i don't use 3d graphics. i changed the bios agp from 128mb to 64mb and it didn't make any difference; it still shows 3.5gb (fedora fc3)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 24 Jan 2005 15:06:33 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
Right. It's a shame that the entire standard C library is implemented in an unreasonable way (it does exactly the wrong thing, by the above criterion). That may have something to do with why so many C programs have buffer overruns.

linux magazine, feb. 2005, pg. 38, The Oldest Trick in the Book, Understanding the buffer overflow exploit

... from above
According to NIST, in the past 4 years 871 buffer overflow vulnerabilities were exploited, comprising about 20 percent of all exploits

... snip ...

which is about the same percentage that I calculated from the CVE database.

Article mentions that the exploit first gained widespread notoriety in 1988 with the Morris worm.
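
the "oldest trick" shape that such articles describe is typically an unchecked copy into a fixed-size stack buffer. a minimal hedged sketch of the vulnerable pattern and a bounded alternative (illustrative only, not taken from the article):

    #include <string.h>

    void vulnerable(const char *input)
    {
        char buf[64];
        strcpy(buf, input);       /* no check against the target's
                                     capacity: the classic overrun */
    }

    void bounded(const char *input)
    {
        char buf[64];
        strncpy(buf, input, sizeof(buf) - 1);  /* honor the capacity */
        buf[sizeof(buf) - 1] = '\0';           /* strncpy may not terminate */
    }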

for some topic drift about bitnet email worm that predates the internet worm by about a year:
https://www.garlic.com/~lynn/2004p.html#13
https://www.garlic.com/~lynn/2004p.html#16
https://www.garlic.com/~lynn/2004p.html#17
https://www.garlic.com/~lynn/2004p.html#21

note that the original mainframe TCP/IP stack had been implemented in vs/pascal. It had some issues ... getting about 44kbytes/sec thruput using 100 percent of a 3090 processor. I enhanced the stack with RFC1044 support ... and in testing at cray research between a cray and a 4341-clone ... it was getting 1mbyte/sec using only a modest amount of the 4341-clone processor. recent posting on the subject
https://www.garlic.com/~lynn/2005.html#51

collected posts mentioning rfc 1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

as an aside, I'm not aware of any buffer overflow exploits in this implementation.

buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#buffer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 24 Jan 2005 15:10:16 -0700
glen herrmannsfeldt writes:
Maybe because assembler programming requires more thought to get things right, including more thought on the lengths and checks.

i don't think so ... it had register content management problems with comparable frequency. one of the claimed advantages of using a higher level language over assembler was that the higher level language took over and automated the drudgery of register content management.

the issue wasn't so much assembler specifically ... it was in the overall environment (that the assembler was used in) there was a pervasive convention of carrying buffer length as part of the buffer.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Mac is like a modern day Betamax

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mac is like a modern day Betamax
Newsgroups: alt.folklore.computers
Date: Mon, 24 Jan 2005 23:26:19 -0700
pc writes:
on my Windows 2K, Cyberlink PowerDVD seems to. i'm guessing it has something to do with video memory, so maybe it's not a great example but i doubt Mac's have same problem (the only one i ever used was the original Lisa and i gather Macs have some BSD code in them now). (in windows, i go shift-click to get the "run-as" option.) a couple of other programs too, but i can't remember which ones at the moment.

mach ... used for NeXT and the current mac/os ... microkernel developed at cmu. a number of platforms used mach at one time or another. other things by cmu in that period were the andrew filesystem, andrew widgets, camelot transaction system, etc.

another major unix-like system was the distributed Locus system out of UCLA. Locus was the basis for aix/370 and aix/ps2.

DCE work had some amount of input/influence from both andrew filesystem and locus (as well as apollo, dec, ibm's DFS, etc).

some random past mach posts
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001n.html#35 cc SMP
https://www.garlic.com/~lynn/2002i.html#73 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2003c.html#45 Early attempts at console humor?
https://www.garlic.com/~lynn/2003e.html#25 A Speculative question
https://www.garlic.com/~lynn/2003e.html#33 A Speculative question
https://www.garlic.com/~lynn/2003.html#46 Horror stories: high system call overhead
https://www.garlic.com/~lynn/2003.html#50 Origin of Kerberos
https://www.garlic.com/~lynn/2003i.html#66 TGV in the USA?
https://www.garlic.com/~lynn/2003j.html#72 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004k.html#50 Xah Lee's Unixism

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 DIAGNOSE

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 DIAGNOSE
Newsgroups: alt.folklore.computers
Date: Tue, 25 Jan 2005 13:00:01 -0700
pc writes:
i don't know why but i'm curious about the various uses people put the 360 DIAGNOSE instruction to.

a friend once told me of his New Zealand experience in the 60's, at Uni-Lever i think, when the IBM reps ordered him not to use DIAGNOSE. he refused and the dispute went all the way up to Lever's chairman and IBM's boss for the south Pacific, if i've got it right. settled in friend's favour on the grounds that customer is always right.

i've forgotten what he was using it for, but it had nothing to do with VM or CMS.

years later i worked at a place where most of the assembler developers didn't know anything about MVS JCL. all we used for testing was a handful of TSO Clists and TSO TEST. change control people took care of everything else. one of the interrupts was SVC 98, i think. given a 3270 world then, i thought it was pretty productive.


"real" diagnose tends to invoke machine specific microcode whose operations aren't defined in the POP (aka the POP says something about diagnose functions are defined as being model specific).

I had done a modification for CMS disk i/o while an undergraduate that used a special CCW that encapsulated the whole seek/search/read-write process in a single operation (minimizing a bunch of pathlength in CCWTRANS) and was defined to have CC=1, CSW-STORED semantics (i.e. lots of asynchronous pathlength and processing was bypassed in both CMS and CP). part of the reason it worked was that (both CP and) CMS formatted CKD disks into a standardized format and effectively simulated fixed-block architecture on CKD disks (so all disk CCW operations were very uniform and regular). It cut about 90 percent of the pathlength and overhead out of CMS disk i/o operations.

People at cambridge (primarily bob adair) were very emphatic about not violating the POP ... so the cms performance "enhancement" was remapped to a (simulated) diagnose instruction ... under the architecture umbrella that DIAGNOSE is defined as being model dependent ... and an artificial "machine model" abstraction could be defined for a virtual machine. The semantics stayed pretty much the same ... a uniform channel program that bypassed loads of the generalized CCWTRANS process and appeared as a synchronous operation (eliminating the asynchronous processing overhead).

over the years, lots of liberties were taken to enhance and optimize the cms virtual machine environment using the virtual machine model diagnose interface.

this is not to say that there weren't things like service diagnostics that were machine model specific ... invoking unique real hardware operations embedded in the microcode of a specific machine model (and not defined/specified in the principles of operation).

misc. past postings about CKD disk/dasd
https://www.garlic.com/~lynn/submain.html#dasd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

stranger than fiction

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: stranger than fiction
Newsgroups: alt.folklore.computers
Date: Tue, 25 Jan 2005 13:31:02 -0700
from the annals of truth stranger than fiction ... i was interviewed before xmas for a possible magazine article about the garlic.com/~lynn/ website and its mainframe historical observations (in large part archived postings made to this news group over the years). I just took a call from them now wanting to do a photoshoot for the article. trouble is that other than screen and keyboard, there is very little to take pictures of.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360POO

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360POO
Newsgroups: alt.folklore.computers
Date: Tue, 25 Jan 2005 15:06:10 -0700
Rich Alderson writes:
Because these clunky PDFs are the result of scans to TIFF format which are then bound into electronic codices to resemble the originals (often rare); using OCR to make them into searchable text is a different, far more time- consuming process.

Al Kossow does what he does out of the goodness of his heart, in his spare time--and he has a real job. Thank him for his good works, and stop bitching!


note that the principle of ops was one of the major mainstream documents (other than the cp and cms documents) that was put into cms script softcopy form.

one of the original reasons was that the superset of the principles of operation is the architecture "redbook" ... which has a lot more stuff than what appears in the POP (possibly as much again as the pop subset). they used script conditionals to control whether the redbook or the principles of operation subset would be generated.

i think you can tell the transition from the traditionally process printed document to the cms script version by the diagram boxes. the cms script version was printed on 1403 printer and didn't provide solid horizontal lines. Later they got solid horizontal lines with the 3800 ... but in the 1403 printer period, the horizontal diagram lines were not solid.

the original cms script was done at the science center with run-off "dot" formatting commands. After gml was invented at the science center in 1969 ... cms script was enhanced to also provide processing support for gml tags.

recent posts discussing 360 storage key operation, referencing the early 360 POPs (on bitsavers) and more recent POPs (from an earlier part of the buffer overrun thread):
https://www.garlic.com/~lynn/2005.html#5 {Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#6 {Lit.] Buffer overruns

part of this was instigated by the AMD reference about providing no-execute support as part of buffer overflow exploit countermeasures (i.e. it doesn't eliminate the buffer overflow ... but it prevents an attacker from utilizing the buffer overflow in performing certain kinds of system exploits)
https://www.garlic.com/~lynn/2005.html#1 {Lit.] Buffer overruns

misc. science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

misc. gml (sgml, html, xml, etc) posts
https://www.garlic.com/~lynn/submain.html#sgml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: comp.arch
Date: Tue, 25 Jan 2005 15:38:15 -0700
Bernd Paysan writes:
Yes, the 'channel' controllers from the zSeries date back to the /360 or /370 (Lynn Wheeler might know better). The mainframe designs from back then have ideas that the microprocessor folks have forgotten and are now forced to reimplement step by step. Virtualization is one of them (the /360 wasn't, but the /370 was). Those who forget history have to repeat it.

The CDC6600 had a similar concept.


channel programs were in 360 ... and there were channel processors that executed channel programs. channel programs were composed of sequence of channel command words (8 byte instructions) that were executed one at a time by the channel processor. most 360s were microcoded and many of the channel processors were integrated with the same engine running the 360 instructions ... just a different block of microcode.

i/o supervisor would store the address of the start of a channel program in the CAW (channel address word) and issue a start I/O (SIO) to the specific channel number.
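
for illustration, the 8-byte CCW layout can be sketched as a C struct (a hedged sketch: real channel programs are big-endian and the packing here is illustrative, not a usable overlay):

    #include <stdint.h>

    /* 360 channel command word (CCW): command code, 24-bit real
       data address, flags, 16-bit byte count */
    struct ccw {
        uint8_t cmd;       /* channel command (read, write, seek, tic, ...) */
        uint8_t addr[3];   /* 24-bit real storage address of the data */
        uint8_t flags;     /* e.g. chaining flags that link to the next CCW */
        uint8_t unused;
        uint8_t count[2];  /* byte count for the transfer */
    };

the i/o supervisor's SIO then points the channel at the first of these via the CAW, as described above.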

the faster 360s (360/65) had separate boxes that executed channel programs (as opposed to channel processing being integrated on the same engine with the instruction processor)

cp67 virtual machine support running on the 360/67 (basically a 360/65 with virtual memory hardware) had a problem: channel programs are defined as specifying real addresses. Virtual machine support required intercepting the SIO, analysing the virtual machine's I/O sequence, making a copy of it ... and (at least) translating all of the virtual machine specified "addresses" to "real" machine addresses. If the I/O operation involved a transfer that crossed a page boundary and the pages weren't contiguous ... then the copied channel program had to also specify multiple non-contiguous addresses. Furthermore the virtual pages had to be pinned/fixed in real memory (at their translated address) for the duration of the i/o operation.

this scenario has continued ... for all the virtual memory operating systems. in the initial prototype of moving the batch, real-memory os/360 operating system to 370 virtual storage ... a copy of CP67's ccw translation was grafted onto the side of the operating system ... since it had to do similar virtual address to real address translation, pinning/fixing of virtual pages, etc.

part of the paradigm is that almost all i/o operations tended to be direct ... the normal paradigm has been direct asynchronous i/o operation (no buffer moves needed), even at the application level (as opposed to the operating system creating a construct of transfers going on behind the scenes).

the 370/158 had integrated channels. the next generation was 303x. they took a 370/158, eliminated the 370 microcode ... leaving just the integrated channel microcode ... and renamed it the 303x channel director. 370/158s and 370/168s were then repackaged as the 3031 and 3032 using the 303x channel director (in some sense the 3031 was a two processor, smp 158 ... except instead of two processors each with both integrated channel microcode and 370 microcode ... one processor just had the channel microcode and the other just had the 370 microcode). The 3033 started out as the 370/168 circuit design remapped to faster chip technology ... supposedly resulting in a 20% faster processor than the 168. During the cycle tho, additional stuff was done to the 3033 so that it eventually was about 50% faster than the 168.

One of the issues in 370 with faster processors was the synchronous hand-shake required by the SIO instruction between the processor engine and the outboard channel engine. The other problem was the significant impact on cache hit ratios from asynchronous interrupts. "XA", besides introducing 31bit virtual addressing ... also introduced a new I/O processing paradigm ... where a new processing engine could handle much of the asynchronous crud involved with i/o ... and present a much more pleasant queued and low-overhead paradigm to the main processor. The other thing it addressed was real-time I/O redrive. In the 360/370 paradigm ... when there was a queue of requests for a drive, the processor got interrupted, processed the interrupt and eventually got around to redriving the device from the queued requests. The more pleasant queued interface allowed an external real-time engine the capability of redriving queued requests. All targeted at loosening the tight serialization between i/o operations and the instruction engine.

There is now a subset of virtual machine support built into all the mainframe processors called logical partitions. It has most of the characteristics of the original cp67 ... but built into the hardware microcode and supporting a limited number of concurrent virtual machines (partitions). It has simplified the ccw translation process because each partition's memory is contiguous and fixed in storage i.e. simple base & bound rather than fragmented non-contiguous pages. The i/o subsystem can directly use the base&bound values avoiding needing to make an on-the-fly copy of every channel program and modify the storage addresses.
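
that partition case reduces address handling to a check and an add. a minimal hedged sketch (hypothetical names) of base&bound translation:

    #include <stdint.h>

    /* partition memory is contiguous and fixed, so translation is
       simple base&bound rather than page-by-page CCW copying */
    typedef struct { uint64_t base, bound; } partition;

    /* returns the real address for a partition-relative address;
       a sketch -- a real implementation would raise an addressing
       exception rather than return 0 */
    uint64_t translate(const partition *p, uint64_t addr)
    {
        if (addr >= p->bound)
            return 0;              /* outside the partition's bound */
        return p->base + addr;
    }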

this is sort of a high level overview ... a lot of the actual stuff is in the nits of the low level details.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Tue, 25 Jan 2005 18:07:52 -0700
Edward Wolfgram writes:
Lynn's example is not relocateable to the level you are thinking- that is, relocateable at an interrupt level. Once execution starts, virtual (not physical) memory addresses must stay put.

The relocatabilty you are talking about can be done, but it would be enormously difficult and a PIA. It would mean that all memory (address) references must be atomic. E.g.


i had a very simple objective that the executable program image on disk would be identical to the executable program image in virtual memory and that the same executable program image could appear (using virtual memory segment sharing) concurrently in multiple different virtual address spaces at potentially different virtual addresses.

each active, executing context of that executable program image (in 360) could have location specific bindings (like addresses in registers) ... but the storage image of the executable program would not.

os/360 conventions peppered the program image with things called relocatable address constants ... which would be swizzled to an address/location specific value when the program was brought into real storage (or mapped into virtual address space) before actual execution began.

tss/360 accomplished the objective by collecting the relocatable address constants (needing any swizzling) into a different structure that could be private to a specific executable instance ... but wasn't part of the instruction program image.

For the stuff i did, i made do with various programming hacks to resolve things relative to the content of some register ... and in 360, it is possible to establish instance-specific register contents with mechanisms like BALR reg,0.
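
the same trick rendered in C terms (a hypothetical sketch, not the original code): keep only offsets in the shared image and resolve them against a per-instance base established at run time, the analogue of loading a base register with BALR reg,0:

    #include <stdio.h>

    /* shared, read-only "image": offsets only, no absolute addresses */
    static const char strings[] = "alpha\0beta\0gamma";
    static const unsigned offsets[] = { 0, 6, 11 };

    int main(void)
    {
        /* per-instance base established at execution time */
        const char *base = strings;

        for (unsigned i = 0; i < 3; i++)
            puts(base + offsets[i]);   /* resolve offset against base */
        return 0;
    }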

this is somewhat analogous to different threads executing the same exact code tending to have their private instance execution environment ... even though the exact same common code is being executed ... and extending the private instance execution environment to include the instance specific location of the (same) executable code.

in both cp67 and vm370 (360, 370 and later), the virtual memory relocation mechanism allowed placement of virtual storage pages at randomly (and potentially varying) real storage locations ... transparent to the application.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Tue, 25 Jan 2005 19:39:18 -0700
glen herrmannsfeldt writes:
Different objectives have been discussed in this thread, both with and without S/370 style DAT. If one wanted to do timesharing on S/360 including relocating programs as they were swapped in and out, something more complex would be required, possibly including conventions on register use.

It was nice of S/370 to come along when it did to remove the need for that problem.


note the original base/bound stuff that boeing used to do swapping of contiguous memory ... may have been in the microcode of some number of (360/50) machines, possibly associated with emulation (1401, 7094, etc) ... as opposed to some custom microprogramming that they had done.

it was possible to get custom microprogramming support on many of these machines. I think it was Lincoln Labs that may have originated the search-list (SLT) instruction that was installed on some number of the 360/67s.

the boston programming center, in support of the conversational programming system (CPS) ... interactive PLI and BASIC that ran under relatively vanilla os/360 ... had done a custom RPQ microcode package that could be had for the 360/50 ... that was performance acceleration for interpretive execution. I don't know the details of how CPS handled swap-in/swap-out ... it is possible that they controlled a lot more of program execution ... they could keep the instruction image free of embedded location specific information and knew which registers in the private execution context needed swizzling when a swap-out/swap-in sequence didn't bring the program back to the same swap address.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

M.I.T. SNA study team

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: M.I.T. SNA study team
Newsgroups: alt.folklore.computers
Date: Wed, 26 Jan 2005 09:30:35 -0700
i got a new toy that i'm just learning to use (especially trying to cleanup the OCR bit) ... an epson 2480 flatbed scanner

note that the internal network was not SNA (or not SNA well into the 80s)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

misc. past comments related to SNA, SAA, 3tier, etc
https://www.garlic.com/~lynn/subnetwork.html#3tier

There has been some folklore that the PU4/PU5 (sscp/ncp, 3705/vtam) interface complexity was an outgrowth of the failure of fs
https://www.garlic.com/~lynn/submain.html#futuresys

which supposedly had a major objective of addressing the plug compatible controller competition. there was some write-up that blames a project that I worked on as an undergraduate for spawning the plug compatible controller business (i.e. reverse engineered the 360 channel interface and built a channel interface card for an Interdata/3 and the Interdata/3 programmed to emulate a mainframe telecommunication controller):
https://www.garlic.com/~lynn/submain.html#360pcm

the paper is a copy made on an internal corporate copier ... it has one of the copier IDs on the corner of each page. recent post that mentions the incident that led to retrofitting corporate copiers with IDs that showed up on all copied pages:
https://www.garlic.com/~lynn/2005b.html#12


Date: February, 1979

From: M.I.T. SNA study team
c/o J. H. Saltzer
IBM Corporation
Cambridge Scientific Center

Tel:  (617) 253-6016
VNET: JHS at CAMBRIDGE

To:   A. J. Krowe, 620/HAR/3H-29
B. O. Evans, 6C8/VAL/TD-19

This report is an evaluation of the IBM System Network
Architecture, SNA. The evaluation was performed by a team of
three members of the Computer Systems Research Division of the
M.I.T. Laboratory for Computer Science, acting individually as
consultants to IBM. The study focused on the architecture more
than the implementation, and it considered SNA through release
four.

Perspective (underlined)

The three team members are the following:

Jerome H. Saltzer
Professor of Computer Science and Engineering
M.I.T. Laboratory for Computer Science

David D. Clark
Research Associate
M.I.T. Laboratory for Computer Science

David P. Reed
Assistant Professor of Computer Science
M.I.T. Laboratory for Computer Science

The team members are of the academic computer science community,
all with pragmatic interests and experience. Members have
individually designed or in concert with others helped design
time-sharing terminal concentration networks, the communications
and network software for the Multics system, low and high-level
protocols of the ARPA computer network, and both the hardware and
the protocols of a local computer communication network. Higher
level network protocols, "distributed systems," and the
interactions among software systems, data communication systems,
and computer architecture are research specialties of the team
members. The comments in this report are mostly from a practical
rather than a theoretical perspective, but with a strong interest

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: alt.folklore.computers
Date: Wed, 26 Jan 2005 12:02:13 -0700
pc writes:
i knew a mainframe ops mgr who went to an IBM course for something new, maybe it was DOS/VS. he came back shaking his head about one of the other attendees who was from a Burroughs shop. seems the Burroughs man was slowing the class down during the explanations of new Sysgen options. he just didn't get it, especially the steps needed to add additional devices like disk drives. according to him, that work shouldn't be needed, his CE just rolled the boxes through the door and plugged them in.

when i was an undergraduate, one of the changes i made to cp67 (besides adding a whole slew of performance enhancements) was to add tty/ascii support.

standard os/360 & dos was everything (including specifying all i/o devices) needed to be defined in sysgen ... although they tried to alleviate that later in 370 timeframe by introducing "E4" extended sense channel command code ... which would ask the device what it was (only new hardware got the "E4", so there still have to be sysgen for the pre-existing 360 timeframe devices that might still be in use).

cp67, for at least the terminal support, had tried to automatically detect whether it was talking to a 1052 or a 2741 on a line (and what kind of 2741). the standard telecommunications control unit, the 2702, had "SAD" commands that could dynamically associate a line-scanner with a specific line ... i.e. the standard cp67 support set the line-scanner to a specific type and tried some commands (that were known to fail on the wrong device) and then reset the line-scanner and tried some more commands.

so I thot I would be devilishly clever and do a similar scenario when adding the support for tty/ascii. I got it all tested and it appeared to work fine ... until the ibm field engineer informed me that I had just been lucky during my testing. part of the objective was to be able to have a common dial-up pool with a single phone number for all terminals. my testing had so far been limited to fixed connected lines ... but i hadn't yet tried all three kinds of terminal dialing into the same phone number. The problem was that while the 2702 provided the ability to associate any available line-scanner with any available line ... they had taken a short-cut and hard-wired the oscillators to specific lines ... aka you would have problems trying to connect a 2741 (134.x baud) on a line with an oscillator hard-wired for tty (110 baud). The code would correctly recognize any terminal on any (fixed) line w/o sysgen specification ... but the hardware limitation wouldn't allow dynamic line speed operation.

So that was one of the things that prompted the university to build our own plug-compatible telecommunication controller ... the line-scanner was implemented in software on the interdata/3 and would strobe the raise/lower of the incoming signal to dynamically determine the line-speed. Originally a single interdata/3 handled both the mainframe channel interface as well as all the line-scanner/port interfaces. This was later expanded to a cluster of machines with an Interdata/4 dedicated to handling the mainframe channel interface and multiple interdata/3s handling line-scanner duty.
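
the decision step of such a software line-scanner might look like the following hedged sketch (thresholds computed from nominal bit times: 110 baud is roughly 9091 usec/bit, 134.5 baud roughly 7435 usec/bit; the actual signal-sampling loop is omitted and all names are invented):

    #include <stdio.h>

    /* infer line speed from the measured duration of the start bit;
       a greatly simplified, hypothetical version of the interdata/3
       software line-scanner's strobing */
    int baud_from_start_bit(unsigned usec)
    {
        if (usec > 8000) return 110;   /* tty/ascii */
        if (usec > 6500) return 134;   /* 2741 */
        return 0;                      /* unrecognized */
    }

    int main(void)
    {
        printf("%d\n", baud_from_start_bit(9100));   /* 110 */
        printf("%d\n", baud_from_start_bit(7400));   /* 134 */
        return 0;
    }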

misc. past 360 plug compatible controller posts:
https://www.garlic.com/~lynn/submain.html#360pcm

and to not stray too far from the buffer overrun problem ... this is a rare example i know of in cp67 that exhibited the problem. because of the tty/ascii spec, i had used one-byte arithmetic in calculating lengths. The way I remember the event, I think it was called the MIT Urban Lab ... in tech sq ... but a building across the courtyard from 545 (565? ... harvard trust was on the street side of the first floor) ... CP67 was modified to handle some sort of graphics/plotter(?) ascii device at harvard(?) that had a line limit more on the order of 1200 characters. The max. length fields were correctly modified ... but the use of one-byte arithmetic wasn't changed ... and so there were 27 cp67 system failures in one day. tale of the tragedy appears here:
https://www.multicians.org/thvv/360-67.html
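
the failure reduces to length arithmetic done modulo 256. a tiny hypothetical sketch (not the cp67 code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int     input = 1200;     /* the ~1200-character line */
        uint8_t len;              /* one-byte length arithmetic */

        len = (uint8_t)input;     /* 1200 mod 256 = 176: silently wrong */
        printf("computed length: %u (actual %d)\n", len, input);

        /* the max-length *fields* were widened, but intermediate
           one-byte arithmetic still wraps -- hence the failures */
        return 0;
    }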

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 26 Jan 2005 14:18:52 -0700
CuriousCat writes:
Just of curiousity.

Strings can be represented using either of the following.

1) Sentinel method, use a end-of-string marker The length of a string is implicit.

2) Value-Len method, bookkeep both value and its len The length is explicit.

Questions:

If the length of the string is *not known* beforehand, then there is no difference between the two and the latter representation doesn't have any additional benefits. Then, the API which C currently has and the above API would be equivalent. Am I drawing an incorrect conclusion?

Why do most PLs use the former representation and not the latter?


it was fairly common in the 60s to have len+buffer implementations where the len was a two byte header ... i have some recollection of a long ago and far away discussion about the nul-termination being chosen because it would save a byte (compared to a two byte header) ... as well as saving a register (and maybe a machine instruction) in some loop operation implementations ... aka current pointer in a register plus length (or calculated end, i.e. start+length) in another register ... where the nul-termination just required current pointer and relied on finding the zero byte.

this was in a mostly by-gone era with environments of extremely small memories and extremely few registers ... and lots of extensive manual optimization of relatively small amounts of code (typically measured in k-LOCs or possibly 10k-LOCs; hardly applies these days to anything ... or at least anything that might have any portion of a tcp/ip stack associated with it).

the 60s era stuff frequently tended to have maximum length and current length (for dynamic structures that might have various inputs assigned, which then becomes a 4byte administrative overhead) ... any operation that obtained new input would always specify a maximum acceptable length, either explicitly (or implicitly because it was always part of the buffer structure), and the input operations would always return either the actual length input ... or the residual length (maximum minus actual) ... and the buffer header would be updated with the actual.
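
a hedged C sketch of that structure (hypothetical names): a 4-byte administrative header of maximum plus current length, with the input operation bounded by the maximum and returning the residual:

    #include <string.h>

    /* 60s-style buffer header: maximum and current length carried
       with the buffer as two 2-byte fields */
    struct hdr_buf {
        unsigned short maxlen;   /* maximum acceptable length */
        unsigned short len;      /* actual length of contents */
        char data[256];
    };

    /* input operation: accepts at most maxlen, records the actual
       length in the header, returns the residual (max minus actual) */
    size_t read_into(struct hdr_buf *b, const char *src, size_t avail)
    {
        size_t n = avail > b->maxlen ? b->maxlen : avail;
        memcpy(b->data, src, n);
        b->len = (unsigned short)n;
        return b->maxlen - n;
    }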

I believe the majority of PLs actually use the latter ... however, it frequently tends to be both a system infrastructure issue coupled with a language infrastructure issue (i.e. not solely a matter of just the PL). A PL that on most systems and infrastructures tended to use the latter method ... might be forced to use the former method on a platform that otherwise relied heavily on the former method.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Noobie SSL certificate problem

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Noobie SSL certificate problem
Newsgroups: sci.crypt
Date: Wed, 26 Jan 2005 14:56:27 -0700
bill.billtaylor writes:
Sorry if this is in the wrong forum. (if not please point me to the correct place)

Recently I request an PKI certificate from our client. I received the certificate but in the mean time delete the request (.arm file).

Is there a way for me to recreate this request from the certificate that I received (extract the original request)? Or do I need to start over?


in a normal public/private asymmetric environment ... you generate a public/private key pair. you then distribute the public key via various mechanisms (while keeping the private key private and secure). It is possible for you to digitally sign something with your private key ... and the receivers can validate that digital signature (and therefore that the material originated from you) with your distributed public key. Also, others can asymmetrically encrypt something with your public key and only you (with your private key) can decrypt it.

the normal purpose of a PKI certificate is

1) a trusted third party can attest that the supplied public key really belongs to you ... this is applicable to situations where the relying/receiving parties have no possible direct knowledge of you ... but do have some trust or other relationship with the third party (generating and distributing the PKI certificates).

2) the trusted 3rd party has no idea who you will be dealing with ... other than it presumably is somebody that already trusts the 3rd party ... so the 3rd party returns the information directly to you so that you can append it to any future correspondence you might have with unknown relying parties.

this is the letter-of-credit model from the sailing ship days ... as opposed to existing real-time environments where the relying party contacts the trusted 3rd party (say a bank or credit bureau) in real time. The certificate model was created before the days of ubiquitous electronic communication.

the typical process is for you to generate some form that includes your public key as part of the contents and then you digitally sign that form with your private key ... and then send off the form and the digital signature ... as well as a bunch of other validation information to the 3rd party PKI certificate issuing entity. The 3rd party PKI certificate issuing entity uses the public key (included in the form) to validate the transmitted digital signature. Then they do something that validates the information that you are who you claim to be ... and that the public key is really yours. From that they then can generate a PKI certificate. The PKI certificate will contain your public key and some sort of identifying information about you.

the original request should have been created with your public key (which should also be in the body of the returned PKI certificate) and a digital signature generated by the private key ... so hopefully you still have a copy of the original private key laying around somewhere.

misc. stuff and past posts about SSL certificates:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 27 Jan 2005 09:22:23 -0700
CuriousCat writes:
So would it be correct to say that apart from the size/speed issues in choosing one representation over the other, they suffer from similar problems from a security point of view? I am thinking (like yourself) yes, because they appear equivalent from that standpoint.

The only advantage that a value-length representation can offer that I can think of is the separation offered between the "value" and "len". So knowing the length before reading in the value helps. This, unfortunately, is not possible in the sentinel representation since they are tied strongly together.


there are some additional issues ... one is that with nul-terminated strings the data itself is required to carry the length ... and it doesn't give the length of an empty buffer.

the len+pointer paradigm was used not only for existing string length but also for max buffer size. a large number of buffer overflows occur when copying a string into a target area where there is no information about the length of the target area ... where it becomes the responsibility of the programmer to manage the maximum length of the target buffer. the more things that a programmer has to directly manage ... the more opportunities for mistakes.

many of the PLs that used pointer+len constructs ... didn't simply use them for length of areas that contained data ... but also used them for areas (buffers) that didn't currently contain data and/or might contain some data ... but the buffer was actually larger than the data it contained.

the contention is that the large number of buffer overflows in the c programming environment comes from the semantics of copying stuff into buffers of ambiguous length ... it was up to the programmer to provide the administrative function of keeping track of such lengths ... which was also the source of the mistakes leading to the buffer overflows.

the infrastructures using pointer+len used them not only for current lengths but also about potential or maximum lengths. the operation and semantics of copy operations in such environments would honor the max/potential length values of the target locations ... eliminating one possible kind of programmer mistake (misspecifying and/or not specifying at all the length of the target location).

the issue with the semantics of the nul-terminated paradigm ... wasn't so much that it couldn't be used for delimiting the lengths of data that existed ... but that it wasn't used very well for delimiting the lengths of things that didn't contain data (maximum length of target buffers).

a hypothetical (alternative) paradigm (to pointer+len) might involve defining buffer areas not containing data as being nul-filled ... and such areas were then terminated by a non-nul ... but that gets into other potential structural issues.

however, i would contend that the use of pointer+len paradigm handles both data area lengths and non-data area (i.e. max. length of target buffers not currently occupied with data) lengths in a much more consistent and uniform paradigm.
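
a minimal hedged sketch (hypothetical names) of copy semantics under the pointer+len paradigm, where the target's maximum length is honored automatically rather than remembered by the programmer:

    #include <string.h>

    /* the target's capacity is part of the buffer itself */
    struct pbuf { size_t max, len; char *data; };

    void pbuf_copy(struct pbuf *dst, const struct pbuf *src)
    {
        size_t n = src->len < dst->max ? src->len : dst->max;
        memcpy(dst->data, src->data, n);   /* can never overrun dst */
        dst->len = n;
    }

    /* contrast: strcpy(dst, src) carries no notion of the target's
       capacity -- the programmer must track it separately */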

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers,comp.security.unix
Date: Thu, 27 Jan 2005 09:31:55 -0700
"D. J. Bernstein" writes:
that don't check for errors. A very-high-level caller uses ferror() to check for errors after fflush() and before fsync()+close()+rename(). The code is---thanks to stdio's wise decision to record errors in an easily accessed error variable---simpler than C code that passes write errors up the call chain.

As I said before, this failure-recording strategy compensates somewhat for the lack of an exception mechanism in C. I'm not saying it's a perfect substitute---often one still has to pass _read_ errors up the call chain---but it's far better than nothing.


note that the buffer overflow exploits that i quoted from the cve database ... and the numbers that linux magazine quoted from nist buffer overflow exploits ... weren't about the percentage or kind of buffer overflow failures ... they were the percent of exploits where an attacker could successfully attack a system using a buffer overflow vulnerability.

also ... the AMD chip hardware feature previously mentioned (along with windows XP support) wasn't introduced to prevent buffer overflows; it was introduced to prevent successful exploits using buffer overflow vulnerabilities.

one might conclude that if specialized hardware is being introduced to prevent attackers exploiting buffer overflow vulnerabilities ... then there is some perception that exploits of buffer overflow vulnerabilities are a fairly significant occurrence (and therefore that buffer overflow vulnerabilities themselves are relatively prevalent).

the hardware isn't even targeted at buffer overflow vulnerability elimination ... it is buffer overflow vulnerability exploit elimination.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: alt.folklore.computers
Date: Thu, 27 Jan 2005 09:48:18 -0700
jmfbahciv writes:
He was right. It took gamers to get this functionality back into computing.

part of the mainframe problem was that once the "sysgen" process was established ... it was difficult to change its course ... even with the introduction of the device "E4" extended sense (although there were still a lot of 360 devices that continued to exist for quite some time).

there were other silly things that tended to perpetuate the sysgen process that also took awhile to change. the os/360 device control block was addressed with a 16bit value ... so initially they all had to be in the first 64k of real memory ... that may have gotten changed so that they had to be in some contiguous 64k of real memory (doing base+offset rather than direct address) ... and then systems started having more devices than could fit in 64k of contiguous memory.

i got to play in the disk engineering lab. when i started, they were running all the testing stand-alone ... a big mainframe configuration switch allowed all the engineering disks to be disconnected from any connectivity except the one being tested. the problem (at the time) was that with a single engineering disk connected to the MVS operating system, MVS had an MTBF of 15 minutes.

i got to rewrite the i/o subsystem so that multiple concurrent engineering disks could be tested concurrently in an operating system environment. random past posts about eliminating all possible failure modes in a highly hostile i/o environment:
https://www.garlic.com/~lynn/subtopic.html#disk

part of it was that an engineering disk might generate more error conditions in a 15 minute period than a whole disk farm might generate in years of operation. eliminating failure modes was then not a hypothetical exercise but a matter of dealing with being constantly bombarded in an extremely hostile operational environment.

i got to somewhat play with generalized device recognition and the E4 sense ... but started out needing a dummy, predefined device control block to assign the device to.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Thu, 27 Jan 2005 09:53:15 -0700
Joe Morris writes:
And this discussion dredges up memories of the one type of OS/360 executable that was relocatable on the fly: the transient (Type 3/4) SVC. Typical systems had multiple transient areas, and the requirements for a type 3/4 SVC (there wasn't really any difference between the two types) included a prohibition against any relocatable constants, a requirement of absolute reentrancy, and a fixed (R12?) base register.

A transient SVC might start execution in one transient area, then surrender control (for example, to do I/O) and resume execution in a different area. The transient SVC handler was responsible for setting the base register to the appropriate value each time a transient SVC was dispatched.


what i remember from mft/mvt days was that there were (initially?) two 2k transient svc areas. part of the issue was that the loader wasn't involved in reading code into the transient area ... so there wasn't any ability to perform a swizzling operation on any (relocatable) address constants.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 27 Jan 2005 11:26:03 -0700
Anne & Lynn Wheeler writes:
however, i would contend that the use of pointer+len paradigm handles both data area lengths and non-data area (i.e. max. length of target buffers not currently occupied with data) lengths in a much more consistent and uniform paradigm.

aka the issue of nul-termination with respect to overrun problems wasn't so much the areas containing data, where nul-termination provided the length ... it has been much more applicable to areas that didn't contain data and/or areas larger than the data contained.

frequently vulnerabilities, exploits and failures are not where things are covered ... but where things aren't covered. the nul-termination convention provides length information for areas containing (at least non-nul) data ... but has served less well in providing length information for areas not containing data ... and/or areas that may be larger than the data contained.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relocating application architecture and compiler support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relocating application architecture and compiler support
Newsgroups: comp.arch,comp.arch.embedded,alt.folklore.computers
Date: Thu, 27 Jan 2005 12:07:32 -0700
Julian Thomas writes:
The /50 emulated only the 1410/7010 and the 7070/74. No 709x (with a 4 byte wide memory, performance projections were dismal) and no 1401 (left to the /30 and /40).

There was no microcode assist for memory swapping in these emulators. There might have been (I don't know one way or the other) in a microcode RPQ that was done for a timesharing company (AB).


this is a really vague recollection ... there was something about partitioning some memory (using base&bound specification) and invoking emulation using the diagnose instruction ... and the gimmick was how to specify base&bound (with diagnose) while still staying in 360 mode (aka it wasn't originally intended for swapping or time-sharing ... I have a vague recollection it had something to do with emulation).

there is a small off-chance that i have a page or two write-up in some pile of paper someplace. if i get this scanning stuff down ... i'll try and find it.

on the other hand ... microcode RPQs weren't that uncommon.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 27 Jan 2005 13:15:20 -0700
Walter Bushell writes:
Shouldn't there be hardware protection against this (and the other one) that is an execute only flag, that would protect code from being written over or modifying itself? This also allows code to be re-entrant.

Without hardware protection there is no buffer except in the mind of the writer, and perhaps in the mind of the maintainer, unless the language forces the concept.


the amd hardware and windows support is the inverse. there has been execute-only support around for some time ... but that didn't necessarily prevent other things from being executed. the inverse is flagging areas as no-execute (i.e. i-fetch won't occur, as opposed to only i-fetch works).

some of this conceptually may have come from risc and harvard architectures .... some machines have separate, non-coherent, store-into i&d caches (at least the d-cache is store-into, as opposed to store-thru; the i-cache doesn't take stores) ... and when loaders were doing their thing ... and laying out instructions for execution ... the "data" was showing up in the d-cache ... but wouldn't necessarily have hit memory yet (which meant that i-cache fetches wouldn't see stuff that the loader had recently completed).

the loader, when it had finished whatever it was supposed to be doing, would then issue a flush of the d-cache (to memory) ... so the correct stuff would appear in memory when the i-cache went to fetch it.

so the amd hardware and windows support was sort of the inverse of execute-only areas ... it was for never-execute areas (it isn't a real inverse because execute-only and never-execute don't necessarily preclude areas that could be both/either execute and data).
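
a minimal posix-flavored sketch of both directions (assuming mmap/mprotect and the gcc/clang __builtin___clear_cache builtin; error handling mostly omitted) ... a loader-style sequence stores the code bytes while the page is writable, flushes the d-cache/i-cache pair, then flips the page to read+execute ... while ordinary data pages are simply never mapped with PROT_EXEC, so any i-fetch from them faults (the no-execute case):

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    /* loader-style sequence under w^x rules: write while writable,
       sync the caches, then make executable (and no longer writable) */
    void *install_code(const uint8_t *code, size_t n)
    {
        void *page = mmap(NULL, n, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return NULL;

        memcpy(page, code, n);       /* stores land in the d-cache */

        /* push the stores out to where i-fetch will see them
           (a no-op on x86, required on many risc machines) */
        __builtin___clear_cache((char *)page, (char *)page + n);

        /* now executable and no longer writable; any buffer that
           never gets PROT_EXEC is a "never-execute" area */
        if (mprotect(page, n, PROT_READ | PROT_EXEC) != 0) {
            munmap(page, n);
            return NULL;
        }
        return page;   /* caller casts to a function pointer */
    }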

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 27 Jan 2005 15:13:22 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
My 50-150% estimate was for legacy code. There are reasons to suspect that ABC might be much cheaper for well-structured code like the sort you are talking about. In the good implementations of ABC, the compiler inserts bounds checks everywhere, but then optimizes. For instance, it tries to see if any bounds checks are provably redundant, and if so, it omits them. It is hard to do this proof interprocedurally, but probably quite feasible to do a good job if it is locally obvious that the check is redundant. For this reason, I would think that ABC might well have little performance overhead when applied to your kind of code.

I agree that the only way to know for sure is to try it and see. I would like to think that a careful programmer would not reject a quality-enhancing tool for fear of performance overhead without first measuring and benchmarking its performance cost. Pre-mature optimization, and all that.


i frequently assert that taking straight-line, well-tested application code and turning it into a service operation takes 4-10 times the original code and possibly ten times the effort.

we once did a one week jad with taligent on what would be involved in taking their existing infrastructure and frameworks and turning them into something that could be used for business critical applications (including the objective of possibly significantly reducing the service quality programming that needed to be generated for each individual application).

after crawling thru all the pieces, it was estimated to be a 30 percent code hit to their existing frameworks/code ... plus three new frameworks; aka it wasn't what was necessary in programming business critical applications .... it is what should be done to the infrastructure as a sound basis for implementing business critical applications.

minor reference to the original payment gateway for something that is now commonly called e-commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jan 2005 10:15:33 -0700
jmfbahciv writes:
Yup. That's what our COMMON.MAC module did. Based on questions answered at MONGEN time, COMMON would take them as arguments to a lot of gagawful macros.

basically that was stage1 sysgen. you punched up the parameters of your installation (including device specification) on maybe 100 cards ... which were macro statements that were run thru the assembler. The assembler expansion of those macros punched about 2000 cards, referred to as stage2 sysgen. this was job control and other stuff to actually build your customized system .... including assembling the i/o configuration specification ... which was then included in the build of the kernel. most of the rest of stage2 was copying all the files needed for the configuration you had specified (like which compilers).

on a 360/30 ... os/360 release 6 or so, stage1 might take 30-60 minutes ... and then stage2 could take 8hrs or so. This was all done "stand-alone" using a special system called a "starter" system (which had a little bit of magic code that didn't need an i/o sysgen, aka the same starter system was shipped to all customers, and was booted on all machines ... for building the customer's real system).

i got interested in the process and took some stage2 punch card output and had it interpreted (had the punch codes actually printed on the top line) and had a good look at it.

by os/360 release 11, the university had a 360/67 which was mostly run in 360/65 mode for batch work and it was my responsibility to build and support the system (when they brought in cp/67 in jan. '68, i also got responsibility for that ... i never had any responsibility for tss/360 ... other than watching some of the ibm'ers play with it on weekends).

One of the os/360 issues was that running batch work tended to be very disk intensive ... especially student jobs, which tended to be almost all job control and program loading of the fortran compiler, followed by almost immediate failure of one sort or another (or, if by some chance the job compiled clean, actually going to completion).

fortran student jobs on the 709 were done with a tape-to-tape fortran monitor that took a second or two per student job. initially on the 360/65, the job scheduling process and various program loading overhead was driving student jobs to well over half a minute each.

after some more analysis ... i decided that i could significantly improve the thruput of the workload by reducing disk arm thrashing ... and to reduce disk arm thrashing, i had to carefully order the files on disk, and to carefully order the files on disk ... i had to control the order in which they were placed on disk. That drove me back to the stage2 sysgen process. First I figured out how to run the sysgen process on a currently running production system (so that i didn't really need stand-alone, dedicated time). I then tore apart a stage2 sysgen deck and carefully reordered the statements that moved files onto the new system disks.

the result was that such a freshly built system ran the student job workload three times faster ... a little over 12secs per student job rather than well over 30secs (still slower than the 709).

In the spring and summer of '68 I also rewrote a bunch of cp/67 kernel pathlength code ... and got to do a presentation on the cp/67 kernel pathlength changes and the mft14 reorganization and optimization at the fall 68 share meeting in Atlantic City.

after the presentation, some senior vp of dataprocessing at amex asked me to lunch across the street at the playboy club.

random past posts on the subject.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#174 S/360 history

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 28 Jan 2005 11:49:48 -0700
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
I suppose that I should repeat the citation that I gave in the post that initiated this (unexpectedly huge monster) thread:

J. Pincus, B. Baker, Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns. IEEE Security & Privacy, July/August 2004, p.20-27.

This journal should be present in any respectable large public library. I sincerely solicit all participants to take a look at that paper.


a whole new industry appears to be springing up ... just saw a new 450+pg book in the book store (didn't bother to buy it), "buffer overflow attacks".

and just this moment, a quick query on amazon.com for "buffer overflow attacks" ... lists three books (the one i just saw at the book store listed as being published 12/1/2004).

... note there is some distinction between software failures because of buffer overflow mistakes ... and being able to exploit/compromise a system via use of various buffer overflow vulnerabilities

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 28 Jan 2005 14:53:23 -0700
"Trevor L. Jackson, III" writes:
In essence you said that the easy checks were not worth doing because by dint of great effort on the part of highly skilled practitioners a superior result can be reached.

I suppose that the apocryphal workman who could cut a piece of plaster to exactly patch a hole (allegory intentional) would feel the same way about a cabinetmaker using a measuring tape, who would feel the same way about a carpenter marking a 2x4 with a blunt pencil, who would feel the same way about a woodcutter marking trees with a can of spray paint.

It is basically an arrogant attitude with no possible practical meaning.

Note that in theory there is no difference between theory and practice, but in practice there is a distinct difference. What you are propounding is theory that contravenes all of the best practices.


actually, feathered wallboard with tape and putty ... which then had to be sanded smooth ... covered up a lot of dings. but then even that became too expensive for many implementations (tape, putty and sanding on ceilings can be a really hard job) ... and they invented the gumball (or some call it spitball) ceiling covering ... basically a really rough spray-on that covered up almost anything. way back in another life i could sink a 16penny common with a 20oz hammer doing rough framing with a tap & single pound (the light tap was to get it far enuf in so i could get my fingers out of the way). i never got so i could do 8penny box (lots of plywood sheeting over studs) with a single pound ... holding the nail until the hammer had contacted the nail head and then getting fingers out of the way before the hammer slammed the nail home.

now they have those fancy nail guns.

the buffer problem is currently way past the failure mode associated with buffer overflows and well into pervasive exploits and attacks taking advantage of the multitude of buffer overflows (aka not about buffer programming mistakes but about being able to mount successful attacks and exploits because of the pervasiveness of buffer overflows) ...

feb. 2005 linux magazine buffer overflow attacks ref:
https://www.garlic.com/~lynn/2005b.html#20 [Lit.} buffer overruns

recent books on buffer overflow attacks ref:
https://www.garlic.com/~lynn/2005b.html#42 [Lit.} buffer overruns

enhanced hardware and operating system support for buffer overflow attack countermeasures (not preventing buffer overflows, attempting to prevent successful attacks that try and take advantage of buffer overflows):
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#32 8086 memory space [was: The Soul of Barb's New Machine]
https://www.garlic.com/~lynn/2005b.html#5 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005b.html#34 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#39 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jan 2005 15:04:41 -0700
Morten Reistad writes:
1977 is pretty early for such user groups, and gives a window onto another era.

i got to go to the '68 spring share meeting in houston where they announced cp/67, and it may also have been the meeting where they started talking about decommitting tss/360 (i nearly got into an altercation at scids with some tss/360 developers, claiming some stuff about much better cp67 performance).

lots of people then could talk about share meetings from the 50s.

I got invited to the san fran share 99 meeting for the 30th anniversary of the announcement of vm/370 ... you can find a picture if you go to
https://www.garlic.com/~lynn/

main share meetings have been held twice a year (there have also been two interims held a year ... usually referred to as xx.5) ... so share 99 is just short of fifty years.

i got to give a talk on the history of vm performance at the oct. '86 european share meeting on the isle of guernsey(?). They only allocated me an hour. The talk was resumed at scids about 6pm and continued until after midnight (with some libation going on during the talk).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 29 Jan 2005 09:56:53 -0700
infobahn writes:
My first programming language was BASIC (on an IBM mainframe running an OS called MUSIC). When the mainframe powers-that-be switched to VM-CMS, I picked up a bit of IBM BASIC as well as VS BASIC. I also learned EXEC, which taught me how to draw ampersands. :-) I spent quite a long time messing around with EXEC, since it was surprisingly powerful (and I won't go into details here, *cough*). My next languages were 68000 assembly language and Pascal. Mercifully, I have forgotten both entirely. I came to C comparatively late, about 15 years ago, and Linux even later, about 5 or 6 years ago. I've had to mess around with data on minicomputers, mainframes, micros, STBs, and even - heaven help us - a Pick system. I've munged that data, for better or for worse, using COBOL, C, C++, Perl, Python, Databasic, dBase, Visual Basic, PHP, and even straight binary on occasion. That list may be incomplete. (I don't claim expertise in all those languages, by the way. Sometimes I pick up a language to do a single task, and then put it thankfully back in the Oblivion Closet. Python being a definite case in point.)

one of the issues is that both the systems environment and the programming language can be influenced by the operational environment.

most of vm/cms (and its predecessor cp67/cms) was written in assembler; however, it was heavily influenced by a hostile, adversarial operational environment. the cp67 timesharing system deployed at the cambridge science center, 545tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

provided time-sharing services to people in corporate hdqtrs handling the most sensitive of corporate data, as well as online access to various students from BU, Harvard, MIT, etc. around the boston area (i had previously mentioned that there were no known security type breaches ... there were a couple of denial of service attacks that were quickly dealt with).

cp67/cms (and later vm/cms) was also used as the basis for many early time-sharing services ... NCSS, IDC, Tymshare, etc. ... where you could assume that there was a spectrum of different corporate clients that could easily be in competition. There was somewhat of a distinction between cp67, which went on the 4th flr, and multics, which went on the 5th flr ... the range of open, commercial timesharing services that were deployed using the 4th flr system ... as well as some number of the other types of deployments.

Note, there were three command processors done on CMS, the original CMS exec from the mid-60s, Cris's EXEC2 from the early '70s, and Mike's REX from the late '70s (although released as REXX in the early '80s).

part of the issue was being able to respond to adversarial and hostile attacks ... like you currently see in the frequent buffer overflow attacks ... recent ref:
https://www.garlic.com/~lynn/2005b.html#43 [Lit.] Buffer overruns

and creating defense-in-depth as countermeasures. slightly related was a recent post in this thread that only ran in a.f.c
https://www.garlic.com/~lynn/2005b.html#35

about the hostile i/o environment of the disk engineering lab
https://www.garlic.com/~lynn/subtopic.html#disk

and redoing the i/o subsystem so that it never failed. The problem was that a normal mvs mainframe system had something like a 15 minute MTBF when dealing with a single disk testcell (although it normally ran fine with whole disk farms for extended periods); the disk testcell environment was extremely hostile, a single disk testcell generating more (and never before seen) errors in 15 minutes than whole disk farms might generate in years.

One assertion is that C and many of the C-based platforms and applications had quite a bit of evolution in environments that were rarely hostile. In some cases, the platforms and applications had a design point of an isolated and relatively congenial operational environment ... not being exposed to things like the buffer overflow attacks that are currently being documented in papers, magazine articles, and books (and motivation for specialized hardware and operating system support to contain the effects of the exploits and attacks).

One possible indication is if the situation goes beyond simple buffer overflow failures and there appears to be a whole sub-culture involved in exploits and attacks specifically related to buffer overflow vulnerabilities (with the evidence of papers, articles, books on the subject of buffer overflow attacks, as well as specialized hardware and operating system support) ... then the situation has gone beyond simply training better programmers.

Going back to the analogies in earlier posts ... when people start complaining about the alarming number of traffic fatalities because of people crossing the center line ... on old 101, on 17 going over the santa cruz mountains ... or even more recently crossing the median on 85 ... you are facing more than simply better driver training and traffic enforcement ... you are talking about needing some serious concrete traffic barriers.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 29 Jan 2005 10:02:08 -0700
"Douglas A. Gwyn" writes:
No, the length can be determined by examining the string. The length of an empty string is zero; it is represented by a single zero-valued byte. (Just as with value-len an empty string is represented by len=0.)

no, i'm talking about

1) an area of storage that happens to contain a pattern of data ... and being able to determine the length of that area of storage ... using a paradigm based on the data pattern.

2) and an area of storage that doesn't contain any data pattern (possibly an empty buffer) and determining the length of that storage using a paradigm based on data patterns ... where there is no data pattern defined for the area of storage.

while it may be possible to define a data-pattern based length metaphor for areas of storage with defined data patterns ... it is much harder to use a data-pattern based length metaphor for areas of storage w/o defined data patterns.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2005 10:14:49 -0700
dutch writes:
I'm sure one reason for those 360s hanging around so long when faster/better/cheaper alternatives were available was a result of the same mind-set. A perfect example of how rulesets crafted to encourage thriftiness and efficiency can actually produce the opposite. The problem is the decision-makers are ignorant of the environment. It makes sense to hold onto stable technology like tractors or washing machines for 10 years. Clinging to obsolete computer technology, especially expensive mainframes, is stupid financially and otherwise.

Another, hidden reason for the longevity of the 360s is that many of them were leased from leasing companies who took advantage of IBM's high rental rates to offer IBM hardware for less per month than IBM charged for the same thing. In order to make their money back they had to extend the life of each 360 out to 7 years, versus the 5 that IBM used to set their monthly rate. It was rare for an end user to purchase a machine back then, nearly all these systems were rented at six-figure-per-month rates for most of them.

That's also one of IBM's major motivations for quickly replacing 360s with 370s -- it screwed the leasing companies big time. IBM hated the leasing companies -- they competed against Big Blue using IBM's own products.


Amdahl gave a talk at mit in the early 70s and was asked a number of questions about the justification he used with VCs for funding his new company to make mainframe clones. he had some comment that there was something like $100billion already invested in ibm mainframe software applications, and that even if ibm walked away from 360/370 tomorrow ... enuf software applications would continue to exist to keep his company in business until the year 2000.

so with a little hind-sight ... one might conjecture that the comment about ibm walking away from 360/370 was in reference to the future system project:
https://www.garlic.com/~lynn/submain.html#futuresys

which was going to completely replace 360/370 with something more radically different from 360/370 than the introduction of 360 had been from what preceded it. possibly, in fact, Amdahl's motivation for forming a 360 clone company was some disagreement over the future system strategy (i.e. the future system strategy may be considered one of the main motivations for the formation of the mainframe clone business).

now there have been some references that the motivation for the future system strategy was the clone controller business (plug compatible controller, pcm) ... a couple specific refs:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003p.html#25 Mainframe Training

... which in the past, i've gotten blamed for helping spawn:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 29 Jan 2005 11:01:37 -0700
Anne & Lynn Wheeler writes:
while it may be possible to define a data-pattern based length metaphor for areas of storage with defined data patterns ... it is much harder to use a data-pattern based length metaphor for areas of storage w/o defined data patterns.

furthermore, i would contend that having a length metaphor for areas of storage with defined data-patterns is just as important as having a length metaphor for areas of storage with undefined or ill-defined data patterns.

if one can contend that it is perfectly satisfactory to have the programmer be responsible for managing the length of arbitrary storage locations with undefined data patterns (like buffers), then why shouldn't it also be their responsibility to manually manage the length of arbitrary storage locations with defined patterns (strings).

Conversely, I would assert that it is at least as useful for the programming environment to provide a paradigm for managing length of storage areas with undefined or ill-defined data patterns as it is for providing a paradigm for managing lengths of storage with defined data patterns (strings).

the only other observation is that while the data-pattern based length paradigm (nul-termination) can be applied to areas of storage with defined data patterns ... it is less well suited to areas of storage with undefined or ill-defined data patterns.

on the other hand, the same pointer+length based length paradigm can be applied both to areas of storage with defined data patterns (strings) as well as to areas of storage with undefined or ill-defined data patterns (buffers) ... aka it might be useful to choose a length metaphor ... which can be equally applied to all types of storage areas regardless of the type of data patterns those storage areas might contain.
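
to make the asymmetry concrete (a small C illustration, nothing more) ... given only a pointer, a callee can recover the data length of a nul-terminated string by scanning for the terminator, but there is no way to recover the capacity of the underlying storage area ... that value has to travel out-of-band:

    #include <stdio.h>
    #include <string.h>

    /* the data length is recoverable in-band; the capacity is not */
    static void show(const char *p)
    {
        printf("data length: %zu\n", strlen(p)); /* scans for the nul */
        /* capacity of whatever p points into? unknowable here ...
           sizeof(p) is just the size of the pointer itself */
    }

    int main(void)
    {
        char area[64] = "hello";   /* 5 bytes of data, 64 of storage */
        show(area);                /* prints 5; the 64 is lost */
        return 0;
    }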

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2005 11:21:10 -0700
Joe Morris writes:
In this time frame, lots of the OS/VS2 shops would have been running SVS, not MVS. Admittedly, SVS ("Single Virtual Storage") was essentially MVT with virtual memory grafted on and the product was intended as a bridge for users moving into MVS, but MVS was still a big step. (Perhaps Lynn or others can provide a timeline for SVS vs. MVS.)

i have some recollection of being in the pok machine room (705/706?) 3rd shift and working around ludlow(?) on a 360/67 ... he was building aos2 (the prototype for os/vs2 ... svs) ... basically taking MVT, adding a little stub code to set up the virtual memory tables and handle page-faults ... and wiring CCWTRANS (taken from CP67) onto the side of MVT IOS to handle all the channel program translation.

the URL i had for the detailed mvs history page recently went 403.
http://os390-mvs.hypermart.net/mvshist.htm

but this appears to be the same/similar
http://www.os390-mvs.freesurf.fr/mvshist.htm
http://www.os390-mvs.freesurf.fr/mvshist2.htm

some number of notes about the above

... it refers to os/vs1 having a single virtual memory and os/vs2 having multiple virtual memories.

os/vs1 was essentially os/360 mft with single virtual memory grafted on the side, and os/vs2 was initially "SVS" ... os/360 mvt with single virtual memory grafted on the side.

The initial OS/VS2 release was called SVS, for single virtual storage, and was later enhanced to MVS, for multiple virtual storage.

OS/VS2 release 3 ... i believe was the first MVS ...

starting with unbundling, june 23rd, 1969 ... application software started being charged for ... but the operating system continued to be "bundled" (free) with the hardware.

with the appearance of clone mainframes there was a push to start charging for the operating system.

I got to be the original guinea pig for this with the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and spent six months with the business and planning people working out the guidelines for charging for operating system stuff. The revised rules were that if the software was needed for direct hardware support (aka device drivers, etc), then it was still bundled ... but all other operating system stuff (like performance management or the resource manager) could be priced.

This led to a problem when they went to put out SMP support in the next release. I had all sorts of goodies in the resource manager software ... including a bunch of stuff that SMP was dependent on. The business rules had SMP support being "free" ... since it was directly needed for hardware support ... however it wouldn't be quite following the rules if there was free software that had a pre-requisite of priced software. The solution was to take about 80 percent of the code in the original resource manager and move it into the "base" free operating system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 29 Jan 2005 11:43:47 -0700
Anne & Lynn Wheeler writes:
provided time-sharing services to people in corporate hdqtrs handling the most sensitive of corporate data, as well as online access to various students from BU, Harvard, MIT, etc. around the boston area (i had previously mentioned that there were no known security type breaches ... there were a couple of denial of service attacks that were quickly dealt with).

one of the denial-of-service attack vulnerabilities in cp67 was looping channel programs. CCWTRANS created a copy of the virtual machine's channel program ... and if the virtual channel program "looped" ... the copy that was actually executed also looped.

One day we got an MIT student doing a looping disk channel program which hung the system and required a system reboot. he then did it again as soon as the system was back up (a couple minutes) ... we then terminated his account. He complained to his adviser that we had no right to terminate his account and, furthermore, that it was his right to crash systems as frequently and often as he wanted to.

the CMS disk diagnose i/o interface was primarily a (significant) performance enhancement ... recent post about CMS disk diagnose
https://www.garlic.com/~lynn/2005b.html#23

but it didn't preclude a user from generating a looping channel program using the standard I/O interface with disks.

Not too long afterwards, I had done pagemapped filesystem enhancements for cp67/cms
https://www.garlic.com/~lynn/submain.html#mmap

and allowed specifying that the virtual machine was not permitted to use standard SIO and/or channel programs with the filesystem.

I believe what the time-sharing service bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

did was add some more code to the disk diagnose interface that rejected any kind of non-normal disk I/O sequence ... and also precluded using anything but the disk diagnose interface for accessing disks.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

History of performance counters

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of performance counters
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 29 Jan 2005 13:19:15 -0700
Lee Witten writes:
This posting made me wonder about the history of performance counters in general, and the history of providing public information of their use.

I was an IBM employee in the late 80s / early 90s. At one point, I was working on UNIX for the 3090 mainframes, and I saw a guy carrying a listing several inches thick down the hall. I asked a co-worker who he was and what he was doing, and he said he was from IBM Research, and the listing was a dump of performance counters gathered when the kernel was running, and he was analyzing them. I asked how I could do this, and was told that was confidential info, and only certain privileged folks could learn how.

At a similar point in time, I was logged on to a certain system within IBM, and I read that the first generation RS/6000s had some sort of performance counters that could only be read if you had a specially modified motherboard with special probes hooked up to read them. I never found out any more about them.

I was at DEC during the early Alpha days (1992 or so), and they had performance counters on the first Alpha chip and all the follow ons. I recall we documented how they worked via the usual programming references i.e. man pages for uprofile and kprofile on OSF/1 and documents on IPROBE for VMS. But I don't think we particularly documented how a third party could use them, although I think there was obscure information on the kprof system call in the OSF/1 man pages.

I recall reading there may have been some profile counter support on the first pentium (p5) and there definitely were on the second (p6).

So, I'm wondering if there are any earlier implementations of performance counters, and when they were publicly described. Were the counters generally available to third parties, or were they only really usable via vendor-provided tools? Who was the first to provide the 'interrupt every N events of interest' style of performance counters? Are there any relevant papers, patents, etc?


back in the 360 & 370 days, when you could still attach probes to the computer ... there was a special hardware box that had lots of probes ... which you could hook up all over the computer ... and reduce the results for all sorts of statistics. one of the things you could do was attach probes to the instruction address and create a log of sampled instruction addresses ... in an attempt to build up a profile of where the computer was spending all its time.

note it was also possible to display the current instruction address in lights ... and i believe it was boeing wichita that trained a video recorder on the instruction address lights as a "cheap" performance monitor.

a problem with TCMs and advancing technology ... it was no longer possible to place probes at will thruout the computer. This sort of eliminated the old-style performance monitors but also created a problem for field engineering. For years, field service/engineering had a requirement that they could boot-strap the diagnosis of hardware problems starting with scope probes. As a result, the service processors were born. Service processors were simpler computing technology that could be scoped ... and in turn the service processor(s) had built-in probes from manufacturing for everything else needed to diagnose hardware problems.

Initially, the 3081 had a uc.5 service processor. The follow-on 3090 initially was going to use a 4331 as a service processor running a highly customized version of vm/370 release 6. This was upgraded to dual 4361s as the 3090 service processors, both running highly customized versions of vm/370 release 6.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 29 Jan 2005 13:28:04 -0700
one might even extend the line of reasoning with a biological pseudo-science feature analogy ... that C evolved a data-pattern based length metaphor for storage areas with defined data-patterns and no defined length feature for storage areas with no defined (or ill-defined) data patterns ... because the original C environment was rich in storage areas with data patterns and deficient in storage areas with undefined or ill-defined data patterns.

further extending the biological pseudo-science analogy ... one would observe that populations with vested interests in the evolved data-pattern based length metaphor (adequate for storage areas with defined data patterns) would be resistant to other length metaphors (that might possibly also be used for storage areas with undefined or ill-defined data patterns).

a slight corollary is that most length metaphors that are adequate for storage areas with undefined or ill-defined data patterns are also adequate for storage areas with well defined data patterns ... while the reverse is not necessarily true (i.e. the nul-terminated length metaphor for defined data pattern storage areas can be quite inadequate for storage areas with undefined data patterns).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2005 14:12:44 -0700
Joe Morris writes:
In this time frame, lots of the OS/VS2 shops would have been running SVS, not MVS. Admittedly, SVS ("Single Virtual Storage") was essentially MVT with virtual memory grafted on and the product was intended as a bridge for users moving into MVS, but MVS was still a big step. (Perhaps Lynn or others can provide a timeline for SVS vs. MVS.)

you could also tell that the high-end machines ... 165, 168, 3033, etc ... were designed with OS/VS2 virtual memory in mind. one of the bits used to index the TLB (translation look-aside buffer) was the 8mbyte bit.

when they did SVS ... they laid the MVT kernel out in 8mbytes of the (24bit/16mbyte) virtual address space, leaving 8mbytes of virtual address space for loading and running applications. Part of this was that the standard os/360 paradigm was heavily pointer-passing based ... so there was lots of code all over the place that was dependent on addressing the specific areas that the passed pointers ... pointed to. As a result, using the 8mbyte bit as one of the bits used to index TLB entries ... meant that half the TLB entries went to virtual addresses 0-7mbyte and half the TLB entries went to virtual addresses 8-15mbyte.
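
a toy sketch of that indexing (the entry count and page size here are invented for illustration ... not the actual 165/168 parameters): with the 8mbyte bit (bit 23 of a 24-bit address) as the high index bit, addresses below 8mbyte (the kernel, under SVS/MVS) and addresses above 8mbyte (applications) can never collide on a TLB slot:

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 64   /* invented size, for illustration only */

    /* toy index: 5 low page-number bits plus the 8mbyte bit, so the
       two halves of the 16mbyte address space each get half the slots */
    static unsigned tlb_index(uint32_t vaddr)
    {
        unsigned page = vaddr >> 12;                 /* 4k pages */
        unsigned low  = page & (TLB_ENTRIES / 2 - 1);
        unsigned mb8  = (vaddr >> 23) & 1;           /* the 8mbyte bit */
        return (mb8 << 5) | low;
    }

    int main(void)
    {
        /* same low bits, opposite halves: slots 0 and 32 */
        printf("%u %u\n", tlb_index(0x000000), tlb_index(0x800000));
        return 0;
    }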

later when they went to MVS ... they replicated the virtual address space structure from SVS ... but created a unique virtual address space for each application (an application under MVS would have thot it was running under SVS as the only application). The kernel continued to occupy 8mbytes of each virtual address space.

There was a problem tho ... MVT and SVS had a bunch of semi-privileged subsystem applications ... that were now resident in their own address spaces. Regular applications still used the pointer-passing paradigm to request services of these subsystem functions ... however, the subsystems were now resident in different address spaces with no common addressability. The hack was to create something called the common segment (which came out of the 8mbytes of virtual address space reserved for applications). Basically, passed values were moved into the common segment so that the pointer passing paradigm continued to work.

The problem in the SVS paradigm ... was that all concurrently running applications had to share 8mbytes of the 16mbyte virtual address space. The problem in the MVS paradigm ... was that the common segment area was cut out of the application 8mbyte area ... and had to be large enuf to accommodate all the different kinds of subsystems an installation might have running. In the late MVS/370/168 time-frame, it was common for large installations to have 4-5mbyte common segment areas (leaving only 3-4mbytes of virtual address space for each application).

So along comes the 3033 ... and they come up with a new hack ... called dual-address space support. This allowed a normal application to pass a pointer to a semi-privileged application running in a totally different address space ... and for that application to have access to both its own address space ... and the calling program's address space (allowing the pointer-passing paradigm to continue to work).

Dual-address space was generalized in XA to access registers and program call. Prior to XA there was still quite a bit of library code that would reside in the application address space ... and it was possible to directly call such code by picking up the address of the routine and doing a direct branch-and-link. Either making calls to sub-system applications (in another address space) and/or moving lots of the library stuff to another address space ... would require a kernel call ... with the kernel interrupt routine decoding the kernel call, switching the virtual address space, and a bunch of other gorp. program call and access registers were a mechanism that defined a hardware table structure enforcing some number of rules about making direct calls (and returns) to code in different virtual address spaces (handling all the required changes to virtual address space pointers) ... eliminating the kernel call overhead processing.

random past dual-address space and access register postings:
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001d.html#28 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001d.html#30 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#43 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2005 14:50:19 -0700
dutch writes:
Another amazing thing is the idea of large organizations using equipment that wouldn't be considered adequate for an email PC for Grandma today to run their entire operations. The most common machine in the survey is probably the 370/158. This was a one-MIPS machine. Most of these boxes had strings of disks adding up to a gigabyte or two, and 2 or 3 MB of main memory. You couldn't give away a PC that small today at a garage sale. And yes, mainframe architecture is I/O optimized and multi-channel and all that, but still I guarantee that OS/MVT running on an emulator on an average PC today will perform many, many times the speed of these original machines in overall throughput not just processor speed. No wonder performance was the number-one concern of all the SHARE members that mentioned their problem areas in the remarks. The demand for computing services vastly exceeded the supply, regardless of the money spent on it.

recent posting in comp.arch about personal computers
https://www.garlic.com/~lynn/2005b.html#19

my personal computer was a 64kbyte 360/30 ... normally the university shut down the computing center from 8am sat. until 8am monday. I got a key to the machine room and could have it all to myself for 48hrs on the weekend ... but it was a little hard after staying awake for 48hrs to go to monday classes.

anyway ... nearly 40 years later ... i'm typing this on an (almost) 4gbyte machine ... in a little less than 40 years i've doubled the number of memory address bits on my personal computer from 16 to 32.

the issue back then was price/performance ... the cost of the hardware versus the cost of programmers improving performance ... did it result in a net benefit. the cost of hardware has declined significantly while the cost of programmers has gone up (making it a lot harder to show any net benefit of improved system performance against the cost of the people doing the performance improvement).

there is also the equation of lost opportunity ... given scarce programming resources (regardless of the cost), is the benefit greater applying them to new feature/function versus improving current performance. back then ... once you got the application running ... it needed to run well.

also, back in the dark ages ... with the relatively high hardware costs ... the number of different business processes that could be justified automating was much smaller (and it was easier to justify making the ones that could be justified run better).

with increasing people costs and declining hardware costs ... it is possible to cost justify automating huge numbers of additional business processes ... making more sense to apply scarce programming resources to automating additional business processes (as opposed to optimizing existing automated business processes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 29 Jan 2005 15:14:06 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
If we believe that security defects are like other defects in this respect, then code reviews alone are not sufficient to remove security defects. (Actually, I suspect it is worse than that -- security defects are probably harder to spot in code reviews than other kinds of defects, so it is possible that the rate of security defects that go undetected by code reviews may be even higher than 40%.) This is consistent with my own experience.

security defects frequently are the absence of something ... many defects are recognized as something being incorrect as opposed to something totally lacking. one might claim that it is frequently much harder to recognize the absence of something ... as opposed to something done wrong.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 29 Jan 2005 15:43:25 -0700
one such scenario is my dual-use attack on digital signatures.

the scenario was an infrastructure that correctly and appropriately applied digital signatures for approving/authorizing something, and they could prove that there were no bugs in their code or cryptography.

the problem is that digital signatures are nominally an authentication (and message integrity) paradigm ... while human signatures carry the connotation of having read, understood, approved, agreed, and/or authorized that which is being signed. the basic digital signature paradigm lacks any such operations ... to some extent it would have been semantically more appropriate to label it digital DNA ... rather than digital signature.

so they could prove that the message integrity hadn't been affected and that it actually originated from the correct place ... but they failed to demonstrate anything about whether a person has actually read, understood, agreed, approved, and/or authorized the contents being digitally signed.

the other problem is that given an open public key environment ... one might conjecture that a person might actually use their public/private keys in straight authentication events .... having nothing at all to do with any described authorization environment under consideration.

some number of such authentication environments will be somewhat challenge/response ... with random data being presented for signing, the entity performing a digital signature on the random data and returning it. the issue in the dual-use attack is that if an individual is ever capable of using their private key to digitally sign a random piece of data w/o having read, understood, agreed, authorized, and/or approved such random data ... then they have potentially compromised the use of that same private key for any approval/authorization operation.

so in addition to not providing any digital signature infrastructure that proves that the human has actually read, understood, agreed, approved, and/or authorized the content (as is customarily understood to be implied with a human signature) ... they have not provided any safeguards to absolutely eliminate the chance that the same private key might ever be used in an authentication operation where the same private key signs possibly random data w/o the human having first read, understood, agreed, approved, and/or authorized the contents.

So a trivial solution might be to make it mandatory that every time a private key is used to create a digital signature ... the data being signed has an appended disclaimer (as part of the data being signed) that states that the existence of the digital signature in no way implies that the contents have been read, understood, agreed, approved, and/or authorized. To not have such a mandatory included disclaimer takes significant effort on the part of the person responsible for the digital signature ... i.e. some series of significant events that proves they have read, understood, agreed, approved and/or authorized the contents being digitally signed (i.e. it is otherwise impossible to create a digital signature on purely random data ... it is mandatory that all data being digitally signed includes the disclaimer, and it is otherwise impossible to create a digital signature for data that doesn't contain such a mandatory disclaimer).
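
a sketch of that countermeasure (illustrative only ... sign() stands in for whatever raw private-key signature primitive the token or library actually provides, and the disclaimer text is invented for the example): the signing routine refuses to sign anything that doesn't already carry the mandatory disclaimer, so a challenge/response protocol can't trick the key into signing bare random data that might later be passed off as an authorization:

    #include <string.h>

    /* hypothetical primitive: the raw private-key signature operation */
    extern int sign(const void *key, const unsigned char *msg,
                    size_t len, unsigned char *sig_out);

    /* invented disclaimer text, mandatory prefix on everything signed */
    static const char DISCLAIMER[] =
        "THIS SIGNATURE IMPLIES NO READING, AGREEMENT, OR AUTHORIZATION: ";

    /* refuse to sign anything that lacks the disclaimer; a
       challenge/response protocol must wrap its random challenge,
       and the resulting signature can never be replayed as an
       "approval" of arbitrary content */
    int sign_with_disclaimer(const void *key, const unsigned char *msg,
                             size_t len, unsigned char *sig_out)
    {
        size_t dlen = sizeof(DISCLAIMER) - 1;
        if (len < dlen || memcmp(msg, DISCLAIMER, dlen) != 0)
            return -1;             /* no disclaimer present: refuse */
        return sign(key, msg, len, sig_out);
    }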

misc. past dual-use posts
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards
https://www.garlic.com/~lynn/2004h.html#51 New Method for Authenticated Public Key Exchange without Digital Certificates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Keeping score

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Keeping score
Newsgroups: alt.folklore.computers
Date: Sat, 29 Jan 2005 17:42:13 -0700
Brian Inglis writes:
Talking of war stories: what's the longest continuously awake big "push" anyone's been on?

Mine was about 56 hours: 2 days 8 hours, to do final changes, tests, packaging, install, and demo a product at a customer site. Nobody on the team was fit to drive: had to take a train and cab, slept on the train on the way back; made the sale: customer was impressed by our product and effort, compared to their inhouse teams.


as per
https://www.garlic.com/~lynn/2005b.html#18 CAS and LL/SC
https://www.garlic.com/~lynn/2005b.html#54 The mid-seventies SHARE survey

a fairly large number of times doing monday classes after 48hrs straight in the machine room.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

History of performance counters

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of performance counters
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 29 Jan 2005 23:15:26 -0700
artie writes:
Back in the Days of Yore (mid 70's), CP-V from SDS had a large number of software performance counters, and tuning parameters to go with them. Sufficiently privileged users could view and diddle things. Time quanta for timesharing users, for batch jobs, priority increments for the scheduler, tracking I/O operations per second, number of things in different states and queues, all sorts of fun stuff.

when i did the original dynamic adaptive stuff back when i was an undergraduate ... generalized resource allocation policy, attempting to dynamically determine the bottlenecks, a default policy called fair share, etc ... it got dropped into cp67. in the morphing from cp67 to vm370 ... a lot of the stuff got lost ... so i got an opportunity to re-introduce it with the resource manager ... recent posting about the resource manager
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey

.... however, some people said that I couldn't put out a resource manager that didn't have tuning knobs ... because the most modern state of the art performance management stuff had lots & lots of tuning knobs. unfortunately ... i had done a whole lot of stuff with self monitoring, dynamic adaptive, etc. ... so I put in some tuning knobs to make it look a lot more modern ... give people lots and lots of manual activity ... instead of just dynamically and adaptively doing the right thing continuously as configuration and workload changed. So there were official product documents describing all the formulas ... and all the source was readily available (i.e. source distribution and source maintenance back then).

so there was this little joke ... right in plain sight in the code ... in OR realms it is sometimes referred to as degrees of freedom ... guess whether the manual knobs or the automatic dynamic adaptive feedback values had the greater degrees of freedom?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jan 2005 09:06:03 -0700
Peter Flass writes:
I believe that depreciation in those days was spread over 10 years. What would have been the financial implications of getting rid of the system after, say, 5 years and having to pay the IRS? Would there have been interest or penalties on the money? In the very early days, corporations looked at mainframes the same way they looked at a piece of manufacturing equipment, and expected a similar lifespan (in decades for the machinery). A milling machine doesn't become obsolete very rapidly, unlike a computer, and no one had any other experience to judge from.

in the late 70s ... there was a corporate requirement that you needed a VP signature to get a 3270 on your desk ... we presented a business analysis that the cost of a 3270, amortized over 3yrs, was about the same per month as a business telephone ... considered standard for people's desks (it turns out that many 3270s actually had more like 10yr lifetimes, sometimes more).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 30 Jan 2005 09:27:18 -0700
"Douglas A. Gwyn" writes:
But that has nothing to do with methods of string representation. No PL can tell you anything about an arbitrary region of storage; some structure is required.

however, the nul-termination length paradigm is in-band, data-pattern based ... requiring that the contents of the storage area have a defined structure.

numerous storage areas (like buffers) may have undefined or ill-defined data pattern contents, which makes it difficult to apply a data-pattern based length paradigm to such storage areas.

a simple example is a storage area that is currently an empty buffer ... and into which there is some desire to copy the contents of another storage area ... which does have a defined data pattern and for which there is a "from" length that can be determined (because of a data pattern based length paradigm) ... but the length of the target storage location has no determinable value (unless supplied).

one could hypothesize that many buffer overruns (and the source of many buffer overflow attacks) are because of operations that copy data from a source storage area (for which there is a determinable length, using an in-band, data pattern based length paradigm) to a storage area (for which there is not a readily determinable length and/or an incorrectly determined length, in part because the default paradigm of in-band, data pattern based length determination is not easily applicable to target storage areas that don't yet contain any defined data).

there would seem to be at least a design inconsistency and/or deficiency in the implementation of such copy operations where the source storage length is determinable but the target storage length is not determinable (modulo some manual assistance by the programmer).
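
a minimal C sketch of that asymmetry (function and buffer names are hypothetical, purely for illustration) ... the source length is recoverable from the in-band nul terminator, but nothing in the empty target tells the copy operation how big it is:

    #include <string.h>

    /* hypothetical illustration: the source length is determinable
       from the in-band nul terminator, but the capacity of the
       (empty) target is not determinable from its contents */
    void copy_example(const char *src)
    {
        char buf[16];        /* capacity known only to the programmer */

        /* strlen(src) works: src conforms to the data-pattern paradigm.
           strlen(buf) is meaningless: buf holds no defined data yet.  */

        strcpy(buf, src);    /* overruns buf whenever strlen(src) >= 16 */

        /* avoiding the overrun needs the target length supplied
           out-of-band by the programmer, e.g.
               snprintf(buf, sizeof buf, "%s", src);                   */
    }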

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 30 Jan 2005 13:19:06 -0700
Brian Inglis writes:
Lynn Wheeler noted that he had a buffer overrun problem in his ASCII terminal driver when plotters came along and his one byte character count was exceeded.

note that it required maintenance (source code changes) to the kernel to increase the max. size limit value (to >256) w/o changing the use of one-byte operations. if it had been in a typed programming language ... rather than assembler, presumably the compiler would have caught the discrepancy.
https://www.garlic.com/~lynn/2005b.html#30
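
a small C analogue of the one-byte count problem (values hypothetical) ... the assignment silently wraps modulo 256, exactly the kind of discrepancy a type/range-checking compiler would have flagged:

    #include <stdio.h>

    int main(void)
    {
        unsigned int requested = 300;    /* plotter-sized request, >256 */
        unsigned char count = requested; /* silently wraps: 300-256 = 44 */

        printf("requested %u, one-byte count %u\n", requested, count);
        return 0;
    }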

there are similar types of problems with register content management failures in assembler code ... that are pretty much eliminated by most (higher-level) programming languages.

most PL would allow some form of exceeding array bounds ... if the programmer initialized the index/counter/lengths incorrectly.

there tend to be far fewer problems with length initialization and out-of-bounds operations in environments where there are explicit values supported by the infrastructure from which to initialize length based operations (it is still not impossible to have length-related failures ... it is just that the frequency and probability of them occurring are significantly smaller ... and the programmer has to work much harder to make them happen).

one of the characteristics of the common C language environments is that there is a data-pattern based length metaphor supported for some areas of storage ... which are accessible to various kinds of length and array oriented operations.

a possible problem is that the data-pattern based length metaphor has a difficult time supporting length abstractions for areas of storage where the data pattern is undefined or ill-defined. a failure mode can frequently show up in copying the contents of one storage area to a different storage area. The length of the source storage area might be readily available to the infrastructure (assuming the source conforms to the in-band, data-pattern based length metaphor) ... but the length of the target storage area can frequently be indeterminate (w/o additional assistance from the programmer, which appears to be prone to mistakes) since it may not yet contain any data that conforms to any pattern.

There seems to be a much lower probability of making mistakes related to storage area length operations when the infrastructure provides a standard and uniform length metaphor for describing areas of storage. The issue in the common C language environment is that there is a common and widely used length metaphor for some storage areas that is based on an in-band, data-pattern based length paradigm. However, an in-band data-pattern based length paradigm is difficult to apply to areas of storage that have undefined and/or ill-defined data patterns.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The mid-seventies SHARE survey

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The mid-seventies SHARE survey
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jan 2005 13:48:28 -0700
dutch writes:
IBM's plans were revealed in memos during the antitrust trial. Ever wonder why IBM introduced the 3330 drives only for the 370s and not the 360s? Why no 370-compatible DAT box was offered for any 360? These weren't technical issues, they were decisions driven by business and financial concerns.

there was a 360 DAT box available on the 360/65 ... it was called the 360/67.

DAT/virtual memory was really expensive in 360 technology and tended to consume larger amounts of real memory ... but could more efficiently manage large amounts of real memory (in general the virtual memory versions of the kernels had larger fixed memory sizes and tended to have larger application space requirements for the minimum configuration). The 360-generation disk drives had much lower transfer speeds than the 3330, making 3330 really only practical on the higher-end 370s. There was also a recognized heavy channel busy burden doing 360 CKD seek/search/IO operations and as a result, 3330 also introduced new technology allowing channel disconnect during disk rotation operation ... which in turn required new kind of block multiplexor channel technology.

even retrofitting DAT hardware to 370/155 and 370/165 was a difficult and expensive process. The full 370 architecture called for somewhat more than was eventually made available to customers. There were some combined engineering, architecture and software meetings in POK where the 370/165 engineers said that it would delay announce and delivery by an additional six months if they had to engineer the full 370 virtual memory architecture specification. So as a result ... only a subset of the 370 virtual memory architecture was announced and shipped (based on what the 370/165 engineering schedule could actually handle).

some of these arguments could probably also be made about car manufacturers ... why don't you see more car manufacturers retrofitting 3-4 year old models with 2005 bodies and interiors.

If you are talking about field upgrade of existing 360s to accommodate all the various bits & pieces of technology introduced in 370 ... you could be talking about nearly a complete swap (except for possibly the covers).

most of the attention and effort was going to future system stuff (with 370 activity actually only a somewhat minor side issue).
https://www.garlic.com/~lynn/submain.html#futuresys

problem was that future system failed and was eventually canceled w/o ever being announced (and then there was enormous effort to play technology catchup with 370 because of all the lost years spent on fs):
https://www.garlic.com/~lynn/2000f.html#16
https://www.garlic.com/~lynn/2003l.html#30
https://www.garlic.com/~lynn/2003p.html#25

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 30 Jan 2005 22:22:37 -0700
"Hank Oredson" writes:
It's interesting that not much is needed to avoid buffer overruns in the TCP/IP stack. A small number of carefully placed tests are sufficient. To do the full error checking, categorization and reporting might take as much code as the protocol stack itself. e.g. an ICMP response where appropriate, notification of lower and / or higher level processes (driver, application), event logging if desired. Just recently finished a "hobby" implementation of a somewhat specialized stack and a quick browse through the code looks like 30-40% is there to detect and handle error conditions. There is only minimal error logging capability.

mainframe tcp/ip stack was implemented using vs/pascal in the '80s. i don't know of any buffer overflow related problems in that implementation.

i had done the product rfc 1044 implementation for the stack. the nominal implementation got about 44kbytes/sec thruput while pretty much consuming a 3090 processor. in tuning the 1044 implementation at cray research, between a cray and a 4341-clone, it was able to drive essentially 4341 channel hardware speed of 1mbyte/sec using only a modest amount of the 4341-clone processor.

misc. past 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 31 Jan 2005 08:36:39 -0700
Brian Inglis writes:
As has been pointed out, C strings are unrelated to C buffer overflow vulnerabilities, where explicit length checks are required, but have obviously been omitted by the programmers.

in various copy scenarios ... from a string to an empty buffer, there is an infrastructure source length available based on the nul-terminated, data-pattern based length paradigm ... however there is no infrastructure target length available.

there are other PL environments where there are both infrastructure source lengths and infrastructure target lengths ... and the infrastructure utilizes both the source length and the target length in copy operations. these environments tend to have much lower incidence of buffer overflow ... because the standard copy operations have both source length and target length ... even when the programmer has forgotten to do explicit length checks.

in assembler language environments i've studied ... there have been a significant number of failures because the programmer failed to do sufficient register content management. this is a failure mode that almost doesn't exist in any of the standard programming languages.

two long ago and far away bits of information

1) the choice of infrastructure nul-terminated strings for the length paradigm was explicitly made because it saved on bytes in the representation and possibly registers and instructions in generated code (as compared to an infrastructure length paradigm using pointer plus explicit length)

2) environments that have a single and consistent paradigm across a multiplicity of different constructs tend to have people making fewer mistakes

my observations have been that

1) it is possible to apply the nul-terminated string paradigm to areas of storage containing defined data patterns ... while it isn't very practical to apply the nul-terminated string paradigm to areas of storage that lack any defined data pattern (like an empty buffer).

2) in many other programming languages that have chosen a pointer+length based length paradigm for use in infrastructure storage length definitions, they have applied the same, consistent paradigm both to areas of storage with defined data patterns (like strings) and to areas of storage that don't have defined data patterns (like empty buffers). these environments tend to have much lower incidence of buffer overflow and much lower incidence of (successful) buffer overflow attacks. A specific example: in these environments, in a copy operation of some string to an empty buffer, the infrastructure has determinable lengths (and tends to make use of them) for both the source data area and the target data area (whether or not they've been explicitly provided by the programmer).

so a hypothetical question ... did the explicit choice of a data-pattern based length paradigm (in the C language environment, where it applies only to strings) inhibit the development of an infrastructure length paradigm that applied to areas of storage w/o defined data patterns (like buffers)? this is compared to other programming environments which have an infrastructure length paradigm that is consistent and uniformly applicable to all kinds of storage areas (strings and buffers).

Or conversely, if the explicit C decision had been to choose a pointer+length representation for strings ... would that paradigm implementation also have naturally extended to all storage areas (including buffers)? If C had a single, consistent infrastructure length paradigm that was applicable to all areas of storage ... and was in use by basic infrastructure operations ... would C have a frequency of buffer overflow vulnerabilities comparable to other environments?
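
a sketch of what such a uniform pointer+length paradigm might look like in C (the type and names are hypothetical, not any particular runtime) ... both the string and the empty buffer carry explicit lengths, and the copy primitive uses both:

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char   *ptr;
        size_t  len;   /* bytes of defined data (0 for an empty buffer) */
        size_t  cap;   /* total capacity -- defined even when len is 0  */
    } area;

    /* copy clamps to the smaller of source length and target capacity,
       so an oversized source cannot overrun the target even when the
       programmer supplies no explicit checks */
    size_t area_copy(area *dst, const area *src)
    {
        size_t n = src->len < dst->cap ? src->len : dst->cap;
        memcpy(dst->ptr, src->ptr, n);
        dst->len = n;
        return n;      /* n < src->len signals truncation to the caller */
    }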

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 31 Jan 2005 08:41:04 -0700
"Tom Linden" writes:
That is not quite correct; for IBM, also PL/I and PL8; for VMS, add PL/I. Traditionally, code generators have been considerably more sophisticated than assemblers.

and vs/pascal ... which was used extensively in the 80s ... and either an 801 backend was created for vs/pascal ... or a vs/pascal frontend was created for pl.8 (in fact there may have been both a C frontend and a vs/pascal frontend created for pl.8).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 31 Jan 2005 09:06:20 -0700
Anne & Lynn Wheeler writes:
the amd hardware and windows support is the inverse. there has been execute-only support around for some time ... but that didn't necessarily prevent other things from being executed. the inverse is flagging areas as no-execute (i.e. i-fetch won't occur, as opposed to only i-fetch works).

news item today on no-execute
http://www.cbronline.com/article_news.asp?guid=08DB0B61-E0F6-4051-8EAA-DD06D9196808
A Russian hacker claims SP2 no-execute flawed

Russian security research outfit Friday published details of what it said are flaws in Windows XP Service Pack 2's memory-protection features, a key security upgrade to the operating system.


... snip ...

misc. recent postings mentioning no-execute
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005b.html#39 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#32 8086 memory space [was: The Soul of Barb's New Machine]
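
for comparison, a minimal POSIX sketch of the no-execute idea (this assumes NX-capable hardware and a POSIX system with anonymous mmap; it is not the amd/windows mechanism from the article) ... a page mapped without PROT_EXEC faults on instruction fetch:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        /* anonymous page, readable and writable but NOT executable */
        unsigned char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        page[0] = 0xc3;   /* x86 "ret" byte -- data, not intended as code */

        /* with no PROT_EXEC, NX-capable hardware refuses i-fetch from
           this page: calling into it raises SIGSEGV */
        void (*fn)(void) = (void (*)(void))page;
        fn();             /* expected to fault rather than return */

        return 0;
    }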

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
