26. Steven E. Brenner, "cgi-lib.pl Home Page" - Here is a long list of Perl instruction books as well as documentation for cgi-lib.pl, a very popular Perl library for processing the input of HTML+ forms.
27. Yahoo!, "Gateways (Yahoo!)" - A collection on searching gateway scripts, as well as a number of CGI examples are found here.
28. Yahoo!, "CGI - Common Gateway Interface (Yahoo!)" - A large collection of CGI scripts.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
HELLO, WORLD!
The simplest of scripts.
It is a tradition to begin any programming demonstration with a "Hello, World!" example, one that displays a simple text message. That is exactly what our first example does here. Give it a try:
Hello, World!
The script itself is only two lines long and saved as 01-helloWorld.cgi:
#!/usr/local/bin/perl
print "content-type: text/html\n\nHello, World!";
FILENAMES
The first thing to keep in mind is the name of the script, specifically its extension (.cgi). In this case .cgi denotes an executable script to the HTTP server, as defined in the server's MIME types table. If your script is not saved with a defined extension, then, by default, the content of the script will be returned to the client instead of the result of the script's execution.
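For example, on an NCSA-derived server such as Apache, this mapping is often declared with an AddType directive in the srm.conf configuration file. The exact directive and file vary from server to server, so treat the following line as a sketch and consult your server's documentation:
AddType application/x-httpd-cgi .cgi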
WRAPPERS
The first line of the script is a comment. All lines beginning with a hash mark (#) are considered comments. On Unix platforms, if the first line of a script is a comment whose hash mark is followed by an exclamation point (!, sometimes called a "bang"), then the text following the exclamation point is considered to be the application used to interpret the balance of the text. In this case, the application "/usr/local/bin/perl" is called. This convention is called a "wrapper." The Macintosh and Windows platforms also need wrappers; they are available from the pages describing their respective Perl ports.
OUTPUT
The second line of the script represents the real guts of the demonstration:
1. The print statement outputs everything between the quotation marks, up to the semicolon (;) that ends the statement.
2. The content-type: text/html\n is the "magic line" necessary for your WWW browser to know how to handle the incoming text. (Remember the HTTP headers from the Introduction?)
3. A blank line (the second \n) delimits the HTTP header from its data, just like in Internet email (SMTP) messages.
4. Finally, the text Hello, World! is printed.
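To make the blank line easier to see, the same script could be written with two print statements; this sketch is equivalent to 01-helloWorld.cgi:
#!/usr/local/bin/perl
# output the HTTP header; the second \n creates the blank line ending the header
print "content-type: text/html\n\n";
# output the data itself
print "Hello, World!";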
----------------------------------------------------------------------------
----------------------------------------------------------------------------
CREATING VALID HTML
Writing polite code.
The previous example was not very polite. In fact, it flatly lied about its content, saying its data was HTML when it was plain text. The second example demonstrates how to conform a bit more closely to the HTTP and HTML standards by including more information about the script's content and correctly formatting the output. Try the following script, 02-validCoding.cgi:
Hello, World! #2
The script (below) has exactly the same functionality as the first example, but it is more truthful about its output's content and it formats the results. View the source code of the output of Hello, World! and Hello, World! #2 to see the difference.
THE CODE
Here is the script's code:
#!/usr/local/bin/perl
# 02-validCoding.cgi
# This script demonstrates valid HTTP/HTML coding output.
# Eric Lease Morgan
# from Becoming a World Wide Web Server Expert
# http://sunsite.berkeley.edu/~emorgan/waves/
# 04/01/97 - renamed file as a .cgi
# 01/20/97 - Martin Luther King Day
# declare the MIME standard
$header = "MIME-Version: 1.0\n";
# describe the MIME type
$header .= "Content-type: text/html\n";
# terminate the HTTP header
$header .= "\n";
# create an HTML file
$html = "\n";
$html .= "\n";
$html .= "\n";
$html .= "Example #2 - Hello, World!\n";
$html .= "\n";
$html .= "\n";
$html .= "\n";
$html .= "Hello, World!\n";
$html .= "\n";
$html .= "\n";
# output the header and html content
print "$header$html";
# exit gracefully
exit;
HOW IT WORKS
Here is how the script works:
1. The first few lines of the script are comments. The first line is the script's wrapper. The next few lines provide some documentation.
2. The next few lines build the HTTP header ($header) describing the MIME version and type that is being returned.
3. Next, the variable $html is built including valid HTML tags.
4. Then the variables representing the HTTP header and HTML data are returned with a print statement.
5. Finally, the script quits with the exit statement.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
USING ENVIRONMENT VARIABLES
Getting and using rudimentary input.
Unless your scripts are intended only to display something like the date and time, you will want to get some input from your users in order to create dynamic pages or do some other sort of processing.
As outlined in the Introduction, a proper WWW browser sends requests for data to an HTTP server. These requests also include information about the browser's computing environment. These things might include the Internet name and IP address of the browser's computer, what sort of data the browser can accept, the name of the browser, and the URL the browser is currently displaying. To Perl, this set of information is known as environment variables and can be quite useful in CGI scripting.
EXAMPLE #1
The following example (03-environment.cgi) extracts the data sent by the WWW browser and then creates some simple output based on this data:
"Show me my environment"
As the script demonstrates, quite a lot of useful information is available to CGI scripts through the environment variables. Based on this information alone a person could create simple authentication scripts, or dynamic HTML pages based on the client's operating system or the preferred image format.
EXAMPLE #2
By appending a question mark (?) and some text to the script's URL, an HTML author can supply more input to the exact same script, as demonstrated below. (Notice how the value of the query string changes between Example #1 and Example #2.)
Rudimentary HTML input
THE CODE
#!/usr/local/bin/perl
# 03-environment.cgi
# This script outputs the environment variables.
# Eric Lease Morgan
# from Becoming a World Wide Web Server Expert
# http://sunsite.berkeley.edu/~emorgan/waves/
# 04/01/97 - renamed file as a .cgi
# 01/20/97 - Martin Luther King Day
# create the http header
$header = "MIME-Version: 1.0\n";
$header .= "Content-type: text/html\n";
$header .= "\n";
# initialize the html output
$html = "<html>\n";
$html .= "\n";
$html .= "<head>\n";
$html .= "<title>Example #3 - Environment variables</title>\n";
$html .= "</head>\n";
$html .= "\n";
$html .= "<body>\n";
# using brute force, extract each environment variable
$html .= "<h1>Environment variables</h1>\n";
$html .= "<ul>\n";
$html .= "<li>server software: $ENV{SERVER_SOFTWARE}\n";
$html .= "<li>gateway interface: $ENV{GATEWAY_INTERFACE}\n";
$html .= "<li>server protocol: $ENV{SERVER_PROTOCOL}\n";
$html .= "<li>server name: $ENV{SERVER_NAME}\n";
$html .= "<li>server port: $ENV{SERVER_PORT}\n";
$html .= "<li>authorization type: $ENV{AUTH_TYPE}\n";
$html .= "<li>remote user: $ENV{REMOTE_USER}\n";
$html .= "<li>remote address: $ENV{REMOTE_ADDR}\n";
$html .= "<li>remote host: $ENV{REMOTE_HOST}\n";
$html .= "<li>remote identity: $ENV{REMOTE_IDENT}\n";
$html .= "<li>request method: $ENV{REQUEST_METHOD}\n";
$html .= "<li>script name: $ENV{SCRIPT_NAME}\n";
$html .= "<li>path: $ENV{PATH_INFO}\n";
$html .= "<li>path translation: $ENV{PATH_TRANSLATED}\n";
$html .= "<li>query string: $ENV{QUERY_STRING}\n";
$html .= "<li>content type: $ENV{CONTENT_TYPE}\n";
$html .= "<li>content length: $ENV{CONTENT_LENGTH}\n";
$html .= "<li>http accept: $ENV{HTTP_ACCEPT}\n";
$html .= "<li>http user agent: $ENV{HTTP_USER_AGENT}\n";
$html .= "<li>http referer: $ENV{HTTP_REFERER}\n";
$html .= "<li>http cookie: $ENV{HTTP_COOKIE}\n";
$html .= "</ul>\n";
# add a cute line demonstrating the use of one such variable
$html .= "<p>Hello there, $ENV{REMOTE_HOST}! It's nice to meet you!\n";
# finish the html
$html .= "</body>\n";
$html .= "</html>\n";
# output the header and html content
print "$header$html";
# exit gracefully
exit;
HOW IT WORKS
The operation of this script is not much different from the operations of the previous examples:
1. Like the previous examples, the script begins with a wrapper and some documentation.
2. It then builds the HTTP header.
3. The creation of the HTML is divided into four parts:
i. Like before, the HTML is initialized with standard codes.
ii. Next, each value in the associative array %ENV is extracted and added to the HTML.
iii. A simple line is generated including the value of one of the environment variables.
iv. The HTML code is closed.
4. The HTTP header and HTML data are returned to the server.
5. The script quits.
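Incidentally, the brute force approach above is not required. Because %ENV is an associative array, a short loop can output every environment variable the server defines; the following sketch (not one of the workshop's numbered examples) returns plain text rather than HTML:
#!/usr/local/bin/perl
# output a plain text listing of all environment variables
print "Content-type: text/plain\n\n";
foreach $key (sort keys %ENV) {
	print "$key = $ENV{$key}\n";
	}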
----------------------------------------------------------------------------
----------------------------------------------------------------------------
GETTING INPUT FROM FORMS
FORMs allow for user-friendly input.
In the "old days," there was the HTML tag. By inserting this tag into the HEAD of your HTML documents, a text field appeared on your page allowing you to supply input for a script. The technique still works but is limiting. The developers at NCSA created the CGI specification providing the means for getting a wider variety of input called FORMs. At the time, this new development was called HTML+.
FORMs provide multiple types of user input including:
o check boxes
o hidden variables
o pop-up menus
o radio buttons
o scrolling lists where one or more items can be selected
o single and multiple line text fields
Each of these input elements assigns data to programmer-defined variables in the FORM. When the FORM is "submitted", these variables and their contents are sent to your CGI script for processing.
Each FORM must have a beginning and an ending FORM tag. The FORM tag must have an ACTION attribute. The ACTION attribute is a URL pointing to your CGI script. The FORM tag optionally includes a METHOD attribute whose value is either GET, POST, or PUT. The default value is GET.
The differences between GET and POST are subtle. FORMs using GET can send only a limited amount of data to the remote script, and these FORMs display their contents as a URL in your browser's location field. FORMs using POST can send much more data to the remote script and do not display their contents as URLs. The PUT method is not widely implemented by HTTP servers yet; it is used to copy files from your computer to the host.
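Under the hood, the two methods deliver their data differently: GET places the URL-encoded data in the QUERY_STRING environment variable, while POST sends it to the script's standard input, with CONTENT_LENGTH saying how many bytes to read. Libraries such as cgi-lib.pl (described below) hide this detail, but a rough sketch of reading the raw data yourself looks like this:
# read the raw, still URL-encoded form data
if ($ENV{REQUEST_METHOD} eq "POST") {
	read (STDIN, $data, $ENV{CONTENT_LENGTH});
	}
else {
	$data = $ENV{QUERY_STRING};
	}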
Examples #1 and #2 (below) illustrate some of the subtle differences between GET and POST. Both examples have an ACTION attribute pointing to the previous script in the "Using Environment Variables" section. Notice the differences in output.
Examples #3 and #4 use the same GET and POST techniques as #1 and #2, but these scripts produce output that begins to be useful.
EXAMPLE #1
This FORM's ACTION attribute points to the environment variable script from the "Using Environment Variables" section and its METHOD attribute is GET.
Name?
Address?
EXAMPLE #2
This FORM's ACTION attribute points to the environment variable script from the "Using Environment Variables" section and its METHOD attribute is POST.
Name?
Address?
EXAMPLE #3
This FORM's ACTION attribute points to a script that takes the form's input and does some simple processing. Its METHOD attribute is GET.
Name?
Address?
EXAMPLE #4
This FORM's ACTION attribute points to the same processing script as the example above. Its METHOD attribute is POST.
Name?
Address?
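Since the FORMs above appear here as live input fields, their underlying HTML is not shown. A sketch of the markup behind Example #4 might look like the following; the ACTION path is hypothetical and should point at wherever your copy of the script lives, while the NAME attributes match the fields the script expects:
<FORM ACTION="./scripts/04-gettingInput.cgi" METHOD="POST">
Name? <INPUT TYPE="text" NAME="name">
Address? <INPUT TYPE="text" NAME="address">
<INPUT TYPE="submit" VALUE="Submit the form">
</FORM>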
THE CODE
#!/usr/local/bin/perl
# 04-gettingInput.cgi
# This script handles input via GET and POST from forms.
# It also does some simple processing
# on the REMOTE_HOST environment variable.
# Eric Lease Morgan
# from Becoming a World Wide Web Server Expert
# http://sunsite.berkeley.edu/~emorgan/waves/
# 04/01/97 - renamed file as a .cgi
# 01/20/97 - Martin Luther King Day
# include a cgi processing library.
# consider also cgi.pm
require "cgi-lib.pl";
# extract the input from the form
# and put it into an associative array, %input
&ReadParse (*input);
$n = $input{'name'};
$a = $input{'address'};
# determine whether or not the user is from NCSU
$ncsu = 0;
$h = $ENV{REMOTE_HOST};
if ($h =~ /\.ncsu\./i) {$ncsu = 1}
if ($h =~ /^152\./) {$ncsu = 1}
# create the http header
$header = "MIME-Version: 1.0\n";
$header .= "Content-type: text/html\n";
$header .= "\n";
# start the html
$html = "\n";
$html .= "\n";
$html .= "\n";
$html .= "Example #4 - Getting input from FORMs\n";
$html .= "\n";
$html .= "\n";
$html .= "\n";
$html .= "You said your name was $n and your email was $a.
\n";
$html .= "\n";
# echo whether or not they are from NCSU
if ($ncsu) {
$html .= "According to your computer's Internet name or IP address, ";
$html .= "you are a student, staff, or faculty member of NCSU.
\n";
}
else {
$html .= "According to your computer's Internet name or IP address, ";
$html .= "you are not a student, staff, or faculty member of NCSU.
\n";
}
# finish the html
$html .= "\n";
$html .= "\n";
# output the header and html
print "$header$html";
# quit gracefully
exit;
HOW IT WORKS
This example builds on the previous examples:
1. The script is initialized and documented.
2. A Perl library (cgi-lib.pl, described below) is required.
3. The contents of the form are parsed by the library (&ReadParse (*input);) and the results are assigned to variables ($n = $input{'name'}; and $a = $input{'address'};).
4. The client's host name is determined ($h = $ENV{REMOTE_HOST};).
5. Using regular expressions, the host name is evaluated for specific strings (if ($h =~ /\.ncsu\./i) {$ncsu = 1} and if ($h =~ /^152\./) {$ncsu = 1}).
6. The HTML data is initialized and built as in the previous examples.
7. The HTTP header and HTML data are returned to the server.
8. The script exits.
Obviously, this script is much more complicated than the previous examples, but it is the first script that approaches real-world applicability. The most important difference between this script and the previous examples is the inclusion of the Perl library cgi-lib.pl. This library, written by Steve Brenner, removes much of the complexity of CGI scripting by decoding and parsing the input of forms. CGI.pm, a more robust and full-featured CGI scripting library by Lincoln Stein, is an alternative to cgi-lib.pl and takes advantage of Perl 5's object-oriented nature. Either library is an indispensable tool for your CGI scripting needs.
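For the curious, here is a sketch of how the same input might be read with CGI.pm instead of cgi-lib.pl. It is not one of the workshop's examples; the method names are CGI.pm's, and the rest simply mirrors the script above:
#!/usr/local/bin/perl
# use CGI.pm instead of cgi-lib.pl
use CGI;
$query = new CGI;
# extract the same two form fields
$n = $query->param('name');
$a = $query->param('address');
# output the HTTP header and a simple reply
print $query->header('text/html');
print "You said your name was $n and your email was $a.\n";
exit;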
----------------------------------------------------------------------------
----------------------------------------------------------------------------
SERVER MAINTENANCE
Here methods for maintaining URL integrity and logfile analysis are discussed.
Believe it or not, the easy part of HTTP servers is bringing them up in the first place. The hard part is making them run smoothly after the initial installation. This is akin to maintaining your OPAC's database, weeding your collection, refining your bibliographic instruction techniques, and generating reports on usage. Because of this, truly useful HTTP servers are sometimes few and far between, and it takes a commitment by your institution not only to purchase any necessary hardware but, more importantly, to commit time to the server's upkeep.
SUBSECTIONS
1. URL integrity
2. Analyzing logfiles
SEE ALSO
1. Boutell.Com, Inc., "Wusage" - "Wusage is a statistics system that helps you determine the true impact of your web server. By measuring the popularity of your documents, as well as identifying the sites that access your server most often, wusage provides valuable marketing information. Practically all organizations, whether commercial or educational or nonprofit, need solid numbers to make credible claims about the World Wide Web. Wusage fills that need."
2. Gisle Aas, "LIBWWW-PERL-5" - "The libwww-perl distribution is a collection of Perl modules which provides a simple and consistent programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients, thus libwww-perl said to be a WWW client library. The library also contain modules that are of more general use."
3. Roy Fielding, "wwwstat and splitlog" - "The wwwstat program will process a sequence of HTTPd common logfile format (CLF) access_log files and output a log summary in HTML format suitable for publishing on a website. The splitlog program will process a sequence of CLF (or CLF with a prefix) access_log files and split the entries into separate files according to the requested URL and/or vhost prefix."
4. Roy Fielding, "MOMSpider: Mulit-owner Maintenance Spider" - "MOMspider is a web-roaming robot that specializes in the maintenance of distributed hypertext infostructures (i.e. wide-area webs). The program is written in Perl and, once customized for your site, should work on any UNIX-based system with Perl 4.036."
----------------------------------------------------------------------------
----------------------------------------------------------------------------
URL INTEGRITY
This section describes how to use a link checker named MOMspider.
There is nothing more frustrating to an Internet surfer than error "404", file not found. The dynamic nature of the Internet makes the elimination of this error a challenge, to say the least. This is why it is imperative for you to constantly check the validity of your links. This is especially true if your site collects pointers to other sites.
There are a number of free and fee-based link checkers available on the Internet. One of the very first and still quite useful is MOMspider. Written by Roy Fielding in 1994, MOMspider, or "Multi-Owner Maintenance spider", traverses your WWW site reporting on broken and redirected HTTP-based links. It is written in Perl and is therefore available for just about any operating system.
To get MOMSpider up and running, you must:
1. Install Perl
2. Install libwww-perl (or LWP), a library of Perl-based Internet routines
3. Install MOMSpider
4. Write MOMSpider instruction files
Perl is something you may already have installed on your computer. Installing libwww-perl is merely a matter of downloading the archive and running the make command. Installing MOMspider is just as easy; just download the archive.
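For the record, libwww-perl follows the usual Perl module installation convention; something like the following, run from inside the unpacked archive, is typically all that is needed:
perl Makefile.PL
make
make test
make install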
The most difficult thing about using MOMspider is the creation of instruction files. Instruction files describe which sets of HTML pages should be checked and how MOMspider should report on what it finds. Below is a simple instruction file used to check the validity of the links in Index Morganagus:
EXAMPLE
# This is a simple MOMspider instruction file
# intended to check for broken links in Index Morganagus
# Eric Lease Morgan
# 02/16/97 - first cut
AvoidFile /home/emorgan/.momspider-avoid
SitesFile /home/emorgan/.momspider-sites
<Site
   Name          Index
   TopURL        http://sunsite.berkeley.edu/~emorgan/morganagus/serial-list.html
   IndexURL      http://sunsite.berkeley.edu/~emorgan/morganagus/spider-report.html
   IndexFile     /home/emorgan/public_html/morganagus/spider-report.html
   EmailAddress  eric_morgan@ncsu.edu
   EmailBroken
   EmailRedirected
>
EXPLANATION
In a nutshell, this instruction file tells MOMspider to:
1. Avoid transferring anything found in the .momspider-avoid file
2. Keep track of visited sites in the .momspider-sites file
3. Begin checking links in http://sunsite.berkeley.edu/~emorgan/morganagus/serial-list.html
4. Create a report named http://sunsite.berkeley.edu/~emorgan/morganagus/spider-report.html whose local file name is /home/emorgan/public_html/morganagus/spider-report.html
5. Send everything to eric_morgan@ncsu.edu
Once run, MOMspider creates a report and sends summary information to the specified email address looking something like this:
This message was automatically generated by MOMspider/1.00 after a
web traversal on Sun, 16 Feb 1997 20:08:30
The following parts of the Index infostructure may need inspection:
Broken Links:
For more information, see the index at http://sunsite.berkeley.edu/~emorgan/morganagus/spider-report.html
Examining the data at that index then provides more detailed information.
MOMspider works; it does exactly what it was designed to do. Run regularly, it can help significantly with the integrity of your WWW server.
Another alternative is the installation of a PURL (Persistent URL) server. The PURL server, written and freely distributed by OCLC, is an HTTP server mapping virtual URLs to real URLs. It works much like the Internet names assigned to computers, allowing you to keep your URLs (PURLs) constant and only update the database mapping your virtual URLs to real URLs.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
ANALYZING LOG FILES
This section outlines methods for doing rudimentary logfile analysis.
HTTP servers generate a lot of logfiles. Depending on your server, they may list what was accessed, who accessed it, and with what browser. In a nutshell, there are basically two ways to analyze this information. The first and most popular is to apply some sort of analysis tool to your logfiles. Some of these tools include Wusage, Getstats, and wwwstat, but the most popular seems to be Analog.
The second approach is to import your logfiles into a database and then query the database to create reports. This approach is less popular and more difficult to implement, but it may give you more exact information concerning the use of your server. To make the job a bit easier, you may want to try tabulate, a Perl script that outputs tab-delimited text from "common log format" logfiles. If you use Apache, you can configure it to create tab-delimited files automatically.
ANALOG
Analog is an application that analyzes your logfiles. It runs on Unix, Windows, and Macintosh computers, and it can generate HTML or plain text output. All of its options are compiled into the application and can be overridden through a configuration file or even on the command line. It's fast and it's free.
The most common structure of HTTP server logfiles is the "common logfile format." This format has the following structure:
remotehost rfc931 authuser [date] "request" status bytes
Where:
o remotehost is the name of the computer accessing your server
o rfc931 is the name of the remote user (usually blank)
o authuser is the authenticated user
o [date] is... the date
o "request" is the URL requested from the server
o status is the error code generated from the request
o bytes is the size (in bytes) of the data returned
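A single (hypothetical) entry in this format might look like this:
wombat.lib.ncsu.edu - - [16/Feb/1997:20:08:30 -0500] "GET /index.html HTTP/1.0" 200 1575
And a few lines of Perl, in the spirit of the tabulate script mentioned above (a sketch, not the actual script), are enough to turn such entries into tab-delimited text:
#!/usr/local/bin/perl
# convert common logfile format entries into tab-delimited text
while (<>) {
	if (/^(\S+) (\S+) (\S+) \[(.+)\] "(.*)" (\S+) (\S+)/) {
		print join ("\t", $1, $2, $3, $4, $5, $6, $7), "\n";
		}
	}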
Analog can read the "common logfile format" (as well as others) and generate reports accordingly.
To use Analog, first you must download and uncompress the archive from the Analog home page or any of its many mirror sites.
If you are using a Unix computer, you will have to edit the analog.h file to define the application's defaults. You must then make (compile) the application. Don't fret; it's easy. The Windows and Macintosh versions come pre-compiled and require little extra configuration.
The next step in using Analog is editing its configuration file, analog.cfg. This file tells Analog how to process your logfiles. The most important option is LOGFILE, which tells Analog the exact location of the file(s) to analyze. The next most important option is OUTFILE, which tells Analog where to save its output. You will also want to edit HOSTNAME, HOSTURL, and BASEURL so your resulting reports make sense.
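A minimal analog.cfg might contain nothing more than the options named above. The option names are Analog's, but the values here are made up and should be replaced with your own:
LOGFILE /usr/local/etc/httpd/logs/access_log
OUTFILE /usr/local/etc/httpd/htdocs/usage.html
HOSTNAME "My Library's WWW Server"
HOSTURL http://www.mylibrary.org/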
When run, Analog will examine your logfiles, politely report any errors, create your report, and exit. Furthermore, it will do this quickly!
After playing with Analog for a little while, you may want to explore fine-tuning its myriad options, thus customizing the reports to your needs. Such options include dates, times, host name exclusion, plain text or HTML output, graphical summaries, browser types, and more.
Analog is worth much more than what you will pay for it.
SEE ALSO
1. Boutell.Com, Inc., "Wusage" - "Wusage is a statistics system that helps you determine the true impact of your web server. By measuring the popularity of your documents, as well as identifying the sites that access your server most often, wusage provides valuable marketing information. Practically all organizations, whether commercial or educational or nonprofit, need solid numbers to make credible claims about the World Wide Web. Wusage fills that need."
2. Eric Lease Morgan, "tabulate" - A rudimentary Perl script that takes "common log format" logfiles and outputs tab-delimited text.
3. Kevin Hughes, "Getstats" - "Getstats (formerly called getsites) is a versatile World-Wide Web server log analyzer. It takes the log file from your CERN, NCSA, Plexus, GN, MacHTTP, or UNIX Gopher server and spits back all sorts of statistics."
4. Roy Fielding, "wwwstat and splitlog" - "The wwwstat program will process a sequence of HTTPd common logfile format (CLF) access_log files and output a log summary in HTML format suitable for publishing on a website. The splitlog program will process a sequence of CLF (or CLF with a prefix) access_log files and split the entries into separate files according to the requested URL and/or vhost prefix."
5. Stephen Turner, "Analog: A WWW Server Logfile Analysis Program" - Analog seems to be the most popular logfile analysis program. It is available for Unix, Windows, and Macintosh computers. Its fast and flexible, but just a tiny bit difficult to configure.
6. W3, "Logging in W3C httpd" - This page describes the format of log files for the WC3 server, and specifically the "common log format."
7. Yahoo!, "Log Analysis Tools (Yahoo!)" - Here is a collection of logfile applications and utilities.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
PEOPLE CONNECTION
None of this happens without people and for people.
Technology does not exist in a vacuum. The purpose of bringing up HTTP services is to benefit your constituents (people). Additionally, your services won't go very far unless you have people to staff them. The following sections outline how you can staff your HTTP services and how you can use your OPAC and your HTTP services to the benefit of your patrons.
SUBSECTIONS
1. Staffing
2. Your OPAC
----------------------------------------------------------------------------
----------------------------------------------------------------------------
STAFFING
Dedicate staff resources as well as computer resources to your HTTP server initiatives.*
As stated previously, bringing up an HTTP server is easy. The hard part is maintaining it. This requires staff. If you plan to provide HTTP services, then plan for staff to manage them.
The World Wide Web is primarily a communications medium. For it to be most effective, it requires the skills of various professions. These skills include:
o computer networking and administration
o copy editing
o graphic design and illustration
o information collection and organization
o programming
o writing
Individuals possessing expert skills in more than one of these areas are few and far between. People possessing all of these skills are practically non-existent. Consequently, staffing robust HTTP services requires multiple personnel.
The ideal solution is to create a new department (a "Web Publishing Unit") and hire experts in each of the areas above to provide your HTTP services. A more practical approach is to pull staff from existing departments who possess the needed skills and have these people work as a team.
At the very least, you will want your team to include:
o 1 graphic artist who understands HTML and illustration
o 1 editor who creates and modifies content
o 1 computer programmer/administrator who keeps the machine running smoothly
If your organization is hierarchical, then you will want to consider adding a manager to the Web Publishing Unit to supervise the unit and maintain your institution's "vision."
HTTP services do not exist in a vacuum. It will be paramount for your Web Publishing Unit to communicate with other staff and its constituency. Therefore you might want to have a "Web Board" with liaisons from each of your institution's departments. These people will bring to the table issues for implementation as well as content for your server.
Disadvantages
This model is not without its problems. First, you might not have the monetary resources to create a freestanding unit responsible for server maintenance.
Second, this model would need to be created from scratch and it would not necessarily fit neatly into your current organizational structure. It would mean another hierarchy of some sort for staff to traverse.
Advantages
There are a number of advantages to this model. First, a freestanding unit like this one with several levels of expertise would assure the consistency and quality of the content while distributing the functions across several staff each with important and different roles.
Second, having a centralized unit would allow for the efficient purchase and use of highly specialized software: authoring tools, analysis software, graphic support equipment, link checkers, etc. Other staff throughout the library could concentrate on developing Web content themselves, discovering and helping disseminate other resources, connecting with faculty or other partners for collaborative opportunities and more, all without having to learn the tools in question. This is not unlike your current organizational structure which leaves certain tasks to a cadre of specialized staff with whom others interact in a known workflow.
Third, to say HTTP services represent a powerful communication medium is an understatement. The ever-increasing importance of HTTP services to your institution's mission requires a full-time commitment for the same reasons that units such as reference, circulation, cataloging, and acquisitions require one. Staying current with Web technologies, supervising staff assistants, interacting with staff and users, and pursuing your institution's vision are not responsibilities to be passed on to a committee or made partial duties within an already existing job.
-------------------------------------------------------
* This section borrows heavily from the NCSU Libraries internal document Eric Morgan, Keith Morgan, and Doris Sigl, "Taming the Web: Structured Web Management in the NCSU Libraries" June 1996.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
YOUR OPAC
Your OPAC can form the foundation for your HTTP services.
For decades card catalogs were the heart of library services. With the advent of computers the card catalogs were turned into online public access catalogs (OPACs). In most cases, these databases are still of fundamental importance to library services.
As you know, machine-readable cataloging (MARC) records make up the content of your OPAC. In 1994 a new MARC field was defined, the 856 field for Electronic Location and Access. This field describes:
The information required to locate and retrieve
an electronic item. Use to provide information
that identifies the electronic location
containing the items or from which the resource
is available. In addition field 856 may be used
for linking to an electronic finding aid.
One of the most useful subfields of 856 is subfield u, a placeholder for the URL of an electronic resource. The existence of the 856 field and its subfield u provides the means for cataloging Internet resources.
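In a bibliographic record, such a field might look something like the line below; the notation uses the conventional $u to mark subfield u, the first indicator value (4) denotes access via HTTP, and the whole thing should be read as an illustration rather than a cataloging prescription:
856 4  $u http://sunsite.berkeley.edu/~emorgan/waves/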
The development of CGI interfaces to OPACs provides the means to literally link your users with full-text items from your catalog. One of the very first such interfaces was written by Tim Kambitsch for the DRA system. Since then a number of vendors have created CGI interfaces to their databases.
The combination of these standards and technologies allows you to take the "card catalog" where it hasn't been before. Now, more than ever, it can become a finding tool as opposed to an inventory list. Taking the process one step further, CGI scripts could be placed in 856 fields to provide access to more specialized resources. For example, a library could collect electronic texts, mark them up with SGML, catalog them, and provide access to the texts as well as searching mechanisms completely through the OPAC. Another option is to create a CGI script that downloads a MARC record in communications format from the OPAC to a librarian's desktop. This would allow libraries to share their MARC records without going through a bibliographic utility. If the MARC records in question described Internet resources, then this might even encourage more libraries to catalog electronic items.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
WEBLIOGRAPHY
BROWSABILITY
1. Aslib, Proceedings of the International Study Conference on Classification for Information Retrieval (London: Aslib, 1957)
2. Bohdan S. Wynar, Introduction to Cataloging and Classification (Libraries Unlimited: Littleton CO, 1980) pg. 394
3. Derek Langridge, Approach to Classification for Students of Librarianship (Hamden, Connecticut: Linnet Books, 1973)
CGI SCRIPTING
1. "Overview of CGI" - "This page contains pointers to information and resources on the Common Gateway Interface, a standard for the interface between external gateway programs and information servers."
2. "FastCGI" - FastCGI is a new, open extension to CGI that provides high performance for all Internet applications without any of the limitations of existing Web server APIs.
3. "Perl Language Home Page" - This is the offical home page for Perl
4. "[Perl Ports]" - This will take you to a randomly chosen FTP site hosting the Macintosh and Windows-based ported versions of Perl.
5. O'Reilly & Associates, Inc., "Software Library - Extras" - Here is a set of CGI resources specicically for WebSite.
6. Chuck Shotton, "Using FileMaker Pro with MacHTTP" - An archive with sample forms and CGI that shows how to hook MacHTTP to FMPro.
7. Chuck Shotton, "Writing Search Engines for MacHTTP " - This points to an archive containing C source code for a sample application that performs searches in conjunction with MacHTTP using the "srch" AppleEvent.
8. Danny Goodman, Complete AppleScript Handbook (Random House: New York, 1994)
9. Dave Winer, "Frontier Community Center" - "Frontier is a scripting system for the Macintosh. Lots of features, lots of verbs. It used to be a commercial product, but now it's free. Why? Because I want Frontier to have a shot at becoming a standard. I think it'll be fun!"
10. Derrick Schneider, Tao of AppleScript (Hayden Books: Carmel, IN, 1993)
11. Gisle Aas, "LIBWWW-PERL-5" - "The libwww-perl distribution is a collection of Perl modules which provides a simple and consistent programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients, thus libwww-perl said to be a WWW client library. The library also contain modules that are of more general use."
12. John G. Cope, "Win-httpd CGI-DOS" - Here is a wrapper for Perl scripts written for DOS/Windows machines.
13. Lincoln D. Stein, "CGI.pm" - This is Perl 5 module for creating and processing HTML+ forms.
14. Lincoln D. Stein, "CGI::* Modules for Perl5" - Here is a collection of Perl libraries for creating Perl-based CGI scripts.
15. Matt Wright, "Matt's Script Archive" - A collection of Perl scripts as well as lists of other Perl script collections.
16. Matthias Neeracher, "MacPerl and PCGI" - This points to the FTP archive for MacPerl and PCGI, a script inserting the necessary resources into a MacPerl script so it can be executed as a CGI script.
17. Meng Weng Wong, "Index of Perl/HTML archives" - "This is a list of Perl scripts and archives involving HTML."
18. NCSA, "Common Gateway Interface" - This is the official specification for CGI scripting.
19. NCSA, "Mosaic for X version 2.0 Fill-Out Form Support" - This is the original specification for what was then called HTML+ FORMS.
20. Robert Godwin-Jones, "Guide to Web Forms and CGI Scripts for Language Learning"
21. Roy Fielding, "WWW Protocol Library for Perl" - "libwww-perl is a library of Perl packages/modules which provides a simple and consistent programming interface to the World Wide Web. This library is being developed as a collaborative effort to assist the further development of useful WWW clients and tools."
22. Sandra Silcot, "MacPerl Primer" - "This Primer is intended to assist new users get started with Macintosh Perl, and to point out salient differences for experienced Unix Perlers. This Primer is not a language reference manual, nor does it replace Matthias's documentation or Hal Wine's Frequently Asked Questions (FAQ) about MacPerl. The primer assumes you have already obtained and installed MacPerl, and that you have read the MacPerl FAQ."
23. Selena Sol, "Selena Sol's Public Domain CGI Script Archive and Resource List" - "In the following pages we have included both working examples of our scripts as well as the text of the code so that you can have one window open with the code and the other with the working script. Hopefully this design will help you figure out how we did what we did, so that you can take the ideas and run with them for your own needs."
24. Selena Sol, "Selena Sol's Public Domain CGI Script Archive and Resource Library" - This is a very useful collection of free Perl scripts and libraries for use on our HTTP server.
25. StarNINE, "Extending WebSTAR" - This is an extensive list of scripts and "plug-ins" for WebSTAR.
26. Steven E. Brenner, "cgi-lib.pl Home Page" - Here is a long list of Perl instruction books as well as documentation for cgi-lib.pl, a very popular Perl library for processing the input of HTML+ forms.
27. Yahoo!, "Gateways (Yahoo!)" - A collection on searching gateway scripts, as well as a number of CGI examples are found here.
28. Yahoo!, "CGI - Common Gateway Interface (Yahoo!)" - A large collection of CGI scripts.
MAINTENANCE
1. Boutell.Com, Inc., "Wusage" - "Wusage is a statistics system that helps you determine the true impact of your web server. By measuring the popularity of your documents, as well as identifying the sites that access your server most often, wusage provides valuable marketing information. Practically all organizations, whether commercial or educational or nonprofit, need solid numbers to make credible claims about the World Wide Web. Wusage fills that need."
2. Gisle Aas, "LIBWWW-PERL-5" - "The libwww-perl distribution is a collection of Perl modules which provides a simple and consistent programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients, thus libwww-perl said to be a WWW client library. The library also contain modules that are of more general use."
3. Roy Fielding, "wwwstat and splitlog" - "The wwwstat program will process a sequence of HTTPd common logfile format (CLF) access_log files and output a log summary in HTML format suitable for publishing on a website. The splitlog program will process a sequence of CLF (or CLF with a prefix) access_log files and split the entries into separate files according to the requested URL and/or vhost prefix."
4. Roy Fielding, "MOMSpider: Mulit-owner Maintenance Spider" - "MOMspider is a web-roaming robot that specializes in the maintenance of distributed hypertext infostructures (i.e. wide-area webs). The program is written in Perl and, once customized for your site, should work on any UNIX-based system with Perl 4.036."
READABILITY
1. Dave Raggett, "Introducting HTML 3.2" - "HTML 3.2 adds widely deployed features such as tables, applets and text flow around images, superscripts and subscripts while providing backwards compatibility with the existing standard HTML 2.0."
2. Dave Raggett, "HyperText Markup Language (HTML) " - This is a useful guide to other HTML pages.
3. HTML Writers Guild, "Advice for HTML Authors" - "This is a list of advice for HTML authors, aimed at helping people produce quality HTML. It is intended to educate HTML authors to the elements of good and bad HTML style, focusing on some common problems with current HTML on the Web. It does not seek to ``control'' Guild members, but rather to encourage them to adopt these practices in their everyday HTML construction."
4. HTML Writers Guild, "HTML Writers Guide Website" - "Welcome to The HTML Writers Guild Website, the first international organization of World Wide Web page authors and Internet Publishing professionals. Guild members have access to resources including: HTML and Web business mailing lists, information repositories, and interaction with their peers."
5. James "Eric" Tilton, "Composing Good HTML" - "This document attempts to address stylistic points of HTML composition, both at the document and the web level."
6. Jan V. White, Graphic Design for the Electronic Age, (Watson-Guptill : New York 1988)
7. Kevin Werbach, "Bare Bones Guide to HTML" - "The Guide lists every tag in the official HTML 3.2 specification, plus the Netscape extensions, in a concise, organized format."
8. Microsoft, "Microsoft Site Builder Workshop: Authoring" - This set of pages outline how to take advantage of HTML extentions with Microsoft's Internet Explorer.
9. Microsoft, "Microsoft Site Builder Workshop: Design/Creative" - Here you will find examples of page layout possibilities for HTML and Internet Explorer.
10. Mike Sendall, "HTML converters" - Here is a list of applications converting documents into HTML.
11. NCSA, "Beginner's Guide to HTML" - "The guide is used by many to start to understand the hypertext markup language (HTML) used on the World Wide Web. It is an introduction and does not pretend to offer instructions on every aspect of HTML. Links to additional Web-based resources about HTML and other related aspects of preparing files are provided at the end of the guide."
12. Netscape Communications Corporation, "Creating Net Sites" - This is a page pointing to Netscape HTML extensions.
13. Robin Williams, The Non-Designer's Design Book (Peach Pit Press: Berkeley CA 1994)
14. Roy Paul Nelson, Publication Design, 5th ed. (Wm. C Brown: Dubuque IA 1991)
15. Tim Berners-Lee, "Style Guide for online hypertext" - "This guide is designed to help you create a WWW hypertext database that effectively communicates your knowledge to the reader."
16. W3, "HyperTetxt Design Issues" - "This lists decisions to be made in the design or selection of a hypermedia information system. It assumes familiarity with the concept of hypertext. A summary of the uses of hypertext systems is followed by a list of features which may or may not be available. Some of the points appear in the Comms ACM July 88 articles on various hypertext systems. Some points were discussed also at ECHT90 . Tentative answers to some design decisions from the CERN perspective are included."
17. Yahoo!, "HTML Editors (Yahoo!)" - This is a list of HTML editors and guides to other lists of editors.
18. Yale Center for Advanced Instructional Media, "Yale C/AIM WWW Style Manual" - This is one of the more scholarly treatments of the subject.
SEARCHABILITY
1. "Web Server Search for Windows" - "WSS is a CGI back-end for Windows based Web servers that allows your clients to conduct simple queries on html files in an unlimited number of directories. The output is a listing of links containing the title, heading, or file name of files that contain the search string. You simply modify the search.ini file for the directories you want users to search, and insert a form into your page that includes the number of directories to search, a reference to these directores and a submit button. WSS takes care of the rest."
2. Chuck Shotton, "Using FileMaker Pro with MacHTTP" - An archive with sample forms and CGI that shows how to hook MacHTTP to FMPro.
3. Chuck Shotton, "Writing Search Engines for MacHTTP " - This points to an archive containing C source code for a sample application that performs searches in conjunction with MacHTTP using the "srch" AppleEvent.
4. Glimpse Working Group, "Glimpse" - "Glimpse is a very powerful indexing and query system that allows you to search through all your files very quickly. It can be used by individuals for their personal file systems as well as by organizations for large data collections. Glimpse is the default search engine in Harvest."
5. Kevin Hughes, "SWISH Documentation" - "SWISH stands for Simple Web Indexing System for Humans. With it, you can index directories of files and search the generated indexes. For an example of swish can do, try searching for the words "office and map" at EIT. All of the search databases you see there were indexed by swish. When you do a search, it's the swish program that's doing the actual searching."
6. Mic Bowman, et al., "Harvest Information Discovery and Access System" - "Harvest is an integrated set of tools to gather, extract, organize, search, cache, and replicate relevant information across the Internet. With modest effort users can tailor Harvest to digest information in many different formats from many different machines, and offer custom search services on the web."
7. Yahoo!, "Gateways (Yahoo!)" - A collection on searching gateway scripts, as well as a number of CGI examples are found here.
SECURITY
1. A.L. Digital Ltd., "Apache-SSL" - "Apache-SSL is a secure Webserver, based on Apache and SSLeay. It is licensed under a BSD-style licence, which means, in short, that you are free to use it for commercial or non-commercial purposes, so long as you retain the copyright notices. This is the same licence as used by Apache from version 0.8.15."
2. Brigitte Jellinek, "bjellis perl scripts" - This page hosts a few scripts used to modify username/password combinations for basic HTTP authentication.
3. CERT, "COPS" - "COPS is a unix security toolkit that analyzes your system security."
4. Lincoln D. Stein, "World Wide Web Security FAQ" - "It attempts to answer some of the most frequently asked questions relating to the security implications of running a Web server. There is also a short section on Web security from the browser's perspective."
5. Rutgers University Network Services www-security team, "World Wide Web Security" - "This document indexes information on security for the World Wide Web, HTTP, HTML, and related software/protocols."
6. W3, "Access Authorization in WWW" - "This is the documentation of WWW telnet-level Access Authorization as implemented in October 1993 (Basic) scheme, part of the WWW Common Library). Contains also proposals for encryption level protection (Pubkey scheme proposal and RIPEM based proposal)."
SERVERS
1. "Apache HTTP Server Project" - "The Apache project has been organized in an attempt to answer some of the concerns regarding active development of a public domain HTTP server for UNIX. The goal of this project is to provide a secure, efficient and extensible server which provides HTTP services in sync with the current HTTP standards."
2. "WebSite Central" - This is the official home page of WebSite.
3. Brian Behlendorf, et al., "Running a Perfect Web Site with Apache" (Indianapolis, IN: Que, 1996) - "This book is designed for those who are new to setting up a Web server on a UNIX platform. The featured Web server is Apache, though many of the subjects covered are applicable to other Web servers."
4. CERN, "[Summary of HTTP Error Codes]"
5. David Strom, "WebCompare" - "[T]he leading site for in-depth information on server software for the World Wide Web."
6. NCSA, "NCSA HTTPd Overview" - These pages document the NCSA HTTPd server, the server WebSite is based upon.
7. Roy Fielding, "WWW Protocol Library for Perl" - "libwww-perl is a library of Perl packages/modules which provides a simple and consistent programming interface to the World Wide Web. This library is being developed as a collaborative effort to assist the further development of useful WWW clients and tools."
8. StarNINE, "WebSTAR Product Information" - "WebSTAR(TM) is the industry standard for transforming your Mac into a powerful Web server. WebSTAR can serve millions of connections per day, and is fully extensible through WebSTAR plug-ins."
9. StarNine, "WebSTAR" - Based on Chuck Shotton's MacHTTP, WebSTAR(TM) helps you publish hypertext documents to millions of Web users around the world, right from your Macintosh. You can also use WebSTAR to put any Macintosh file on the Web, including GIF and JPEG images and even QuickTime(TM) movies. And yet, using WebSTAR is as easy as AppleShare(r). Plus, it's faster than many Web servers running on UNIX.
10. Stephen Turner, "Analog" - "Fast, professional WWW logfile analysis for Unix, DOS, NT, Mac and VMS."
WORLD WIDE WEB
1. "World Wide Web" - This URL will take you to a terminal-based WWW browser.
2. "World Wide Web Consortium [W3C]" - The Consortium provides a number of public services: 1) A repository of information about the World Wide Web for developers and users, especially specifications about the Web; 2) A reference code implementation to embody and promote standards 3) Various prototype and sample applications to demonstrate use of new technology.
3. Alan Richmond, "WWW Development"
4. Bob Alberti, et al., "Internet Gopher protocol"
5. CERN European Laboratory for Particle Physics, "CERN Welcome" - CERN is one of the world's largest scientific laboratories and an outstanding example of international collaboration of its many member states. (The acronym CERN comes from the earlier French title: "Conseil Europeen pour la Recherche Nucleaire")
6. CNIDR, "freewais Page"
7. Daniel W. Connolly, "WWW Names and Addresses, URIs, URLs, URNs, URCs" - "Addressing is one of the fundamental technologies in the web. URLs, or Uniform Resouce Locators, are the technology for addressing documents on the web. It is an extensible technology: there are a number of existing addressing schemes, and more may be incorporated over time."
8. Distributed Computing Group within Academic Computing Services of The University of Kansas, "About Lynx"
9. Internet Engineering Task Force (IETF), "HTTP: A protocol for networked information" - HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or "methods", used. A feature of HTTP is the negotiation of data representation, allowing systems to be built independently of the development of new advanced representations.
10. Karen MacArthur, "World Wide Web Initiative: The Project" - [This site hosts many standard concerning the World Wide Web in general.]
11. Mary Ann Pike, et al., Special Edition Using the Internet with Your Mac (Que: Indianapolis, IN 1995)
12. N. Borenstein, "MIME (Multipurpose Internet Mail Extensions)" - "This document is designed to provide facilities to include multiple objects in a single message, to represent body text in character sets other than US-ASCII, to represent formatted multi- font text messages, to represent non-textual material such as images and audio fragments, and generally to facilitate later extensions defining new types of Internet mail for use by cooperating mail agents.
13. National Center for Supercomputing Applications, "A Beginner's Guide to URLs"
14. NCSA, "NCSA Home Page"
15. NCSA, "NCSA Mosaic Home Page"
16. NCSA, "NCSA Mosaic for the Macintosh Home Page"
17. NCSA, "NCSA Mosaic for Microsoft Windows Home Page"
18. NCSA HTTPd Development Team, "NCSA HTTPd Overview"
19. Software Development Group (SDG) at the National Center for Supercomputing Applications, "SDG Introduction"
20. Thomas Boutell, "World Wide Web FAQ" - "The World Wide Web Frequently Asked Questions (FAQ) is intended to answer the most common questions about the web."
21. Tim Berners-Lee, Roy T. Fielding, and Henrik Frystyk Nielsen, "Hypertext Transfer Protocol" - "The Hypertext Transfer Protocol (HTTP) has been in use by the World-Wide Web global information initiative since 1990. HTTP is an application-level protocol with the lightness and speed necessary for distributed, collaborative, hyper media information systems. It is a generic, stateless, object-oriented protocol which can be used for many tasks, such as name servers and distributed object management systems, through extension of its request methods (commands). A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred."
22. Ulrich Pfeifer, "FreeWAIS-sf"
23. University of Kansas, "KUfact Online Information System"
24. University of Minnesota Computer & Information Services Gopher Consultant service, "Information about gopher"
25. URI working group of the Internet Engineering Task Force, "Uniform Resource Locators"
26. Vannevar Bush, "As We May Think" Atlantic Monthly 176 (July 1945): 101-108
27. WAIS, Inc., "WAIS, Inc."
----------------------------------------------------------------------------
----------------------------------------------------------------------------
WHAT PEOPLE ARE SAYING
These are some comments from the evaluations of the Williamsburg workshop.
o "This was the best $160 my organization has spent to date. I especially appreciated Eric's perspective; a librarian and a computer person. He is also an excellent instructor, he used fabuous analogies and mental models. I was able to see the applicability of everything he spoke about, as well as being able to understand the more technical material."
o "Eric did an outstanding job as workshop leader."
o "Excellent handouts."
o "Eric Lease Morgan is very knowledgeable, and I loved his enthusiasm."
o "Eric is an excellent instructor."
o "The website for this workshop is excellent."
o "Eric is an amazing presenter and instructor. I learned so much more than I expected because of his abilities as a fabulous, well paced, receptive presenter. I wish there were many more like him. This is the best workshop I've ever been to."
o "Very good analogies, ie. French resturant and frisbees."
o "Eric is an excellent presenter. He knows a lot and speaks plainly."
----------------------------------------------------------------------------
----------------------------------------------------------------------------
RELEASE NOTES
This section chronicles the updates to the workshop and handouts.
VERSION 1.4
February 19, 1998
* Added SWISH-E help texts. Fixed spelling (hopefully). Added information about WindowsNT servers. Expanded Interactive assistance.
VERSION 1.3 August 27, 1997
* Added the ability to search the workshop's content using SWISH-E. "Thank you, Roy."
VERSION 1.21 June 16, 1997
* Added a brief Miami travel log
VERSION 1.2 April 16, 1997
* Added the pictures to the Travel log
* Made the entire distribution available as a .tar.gz file
VERSION 1.1.2b April 1, 1997
* Moved the CGI scripts to the ./scripts directory so the HTML can be more portable
* Renamed the .pl scripts so they would execute
VERSION 1.1.1b March 31, 1997
* Updated location information
* Add "What People Are Saying"
* Deleted registration form
* Added "Travel Logs"
* Deleted presenters and moved them to Travel Logs
* Moved video description to home page
VERSION 1.0 February 23, 1997 - Initial release.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
TRAVEL LOGS
This section chronicles the locations where the Workshop has taken place.
WILLIAMSBURG MARCH 14, 1997
The day before the workshop, I arrived in Williamsburg. It was the second time in six months that I had the opportunity to visit one of the oldest places in our country. As I strolled up and down the old streets I once again had the chance to soak up the atmosphere provided by the city's College of William & Mary, colonial courthouse, and palace. The day of the workshop Jeffery Herrick, Mack Lundy III, Berna Heyman, and Robert Richardson were very helpful making last-minute arrangements.
VIDEO CONFERENCES
Since nobody wants to listen to another person talk for six hours, and since I didn't think I could talk for six hours, the workshop tried to facilitate video conference presentations. I worked out a deal with the Connectix corporation where they would give me two video cameras in exchange for plugging their hardware. (BTW, their hardware really does work well!) I then gave the cameras to Jean Armour Polly and Roy Tennant. Jean, Roy, and I practiced using CU-SeeMe in anticipation of the big day. Unfortunately, as fate would have it, we had technical difficulties and the video conferences did not take place, even after a few very embarrassing minutes of effort. Oh, well. The images and movies below describe what was supposed to happen.
JEAN ARMOUR POLLY
An internationally-recognized expert, Jean was the first to coin the phrase "Surfing the Internet." With extensive public speaking and writing experience, her latest book is entitled The Internet Kids Yellow Pages. One of the first women elected to the Internet Society Board of Trustees, Jean has always been a relentless advocate for equal access to information.
A true librarian through and through, Jean's home page is at .
ROY TENNANT
Project manager, writer, and teacher, Roy has demonstrated leadership ability within and without the knowledge worker community. He manages the Berkeley Digital Library SunSITE. He is the co-author of Crossing the Internet Threshold: An Instructional Handbook and the author of HTML: A Self-Paced Tutorial. Roy is especially keen at recognizing the abilities of technology for the purposes of information dissemination.
Roy's home page is located at .
PARTICIPANTS
The workshop's structure allowed participants to discuss issues in groups. It was during these group discussions I had the opportunity to capture on film the smiling faces of the attendees.
MIAMI, FL MAY 16, 1997
I arrived at Florida International University (FIU) a day ahead of time and set up the workshop in the modern Kovens Conference Center. The setup was very nice since it included a single projection device flanked on either side by Ethernet-ready portable computers. In a sentence, FIU provided me with an elegant arrangement. "Thank you!"
The workshop itself went very smoothly. I had learned from the first workshop to cut some things (mostly history), and this left more time for more practical agenda items.
After the workshop, I visited with some of my family and went fishing. I caught a dolphin and then had it for dinner.