Amazon Simple Storage Service (S3) is the distributed storage component of the AWS platform. It supports reading, writing, and deleting objects holding anywhere from 1 byte to 5 gigabytes of data each. You can use S3 to store, replicate, and persist an unlimited number of objects in the cloud. However, you should not treat S3 as a local disk and attempt, for example, to run your database from it. S3 simply stores "objects" (files) in "buckets" (roughly analogous to folders). Because S3 has no directory hierarchy, each bucket is given a globally unique name, and you can have multiple buckets under one account. Many customers serve static files such as images or video directly from S3 instead of storing them on a local disk, which gives them virtually infinite storage capacity for their files without purchasing any hardware. For more information visit: http://aws.amazon.com/s3.
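Because S3 exposes a flat key-to-object mapping rather than a filesystem, its core semantics can be illustrated with a minimal in-memory sketch. This models the bucket/object concepts only; it is not the real AWS API, which is accessed over HTTP via an SDK:

```python
# Minimal in-memory model of S3's object-store semantics (illustration only,
# not the AWS API). Buckets map globally unique names to flat key spaces;
# "directories" are just a naming convention inside object keys.

class SimpleStorage:
    def __init__(self):
        self.buckets = {}  # bucket name -> {object key -> bytes}

    def create_bucket(self, name):
        if name in self.buckets:
            raise ValueError("bucket names must be unique")
        self.buckets[name] = {}

    def put_object(self, bucket, key, data: bytes):
        self.buckets[bucket][key] = data   # write (or overwrite) an object

    def get_object(self, bucket, key) -> bytes:
        return self.buckets[bucket][key]   # read an object back

    def delete_object(self, bucket, key):
        del self.buckets[bucket][key]      # delete an object

s3 = SimpleStorage()
s3.create_bucket("my-media-bucket")
# The slash in the key only *looks* like a directory; the namespace is flat.
s3.put_object("my-media-bucket", "images/logo.png", b"\x89PNG...")
print(s3.get_object("my-media-bucket", "images/logo.png"))
```

This flat model is why serving static media "directly from S3" works: each object is addressed by bucket name plus key, with no filesystem in between.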
Software as a Service (SaaS) refers to specialised companies that provide a very specific stack, together with support for that stack, so as to remove the associated technical headaches: just as the cloud removes the need to predict growth rates (i.e. server purchases), SaaS removes the need to maintain, update, and support the specific piece of software you are running. It does not remove the need for a local IT department, but it does remove the need to call that department every time you want an update done or a bug fixed. In terms of the stack, SaaS offers technical support for all the component layers beneath it. Accordingly, in the case of Fedorazon, we have preconfigured a PaaS stack so that anyone who wanted to provide the human component layer on top could call themselves a repository SaaS provider. Examples of SaaS: GDocs, WordPress.com, etc.
A high-speed sub-network of shared storage devices. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. Compared to managing hundreds of servers, each with its own disks, SANs reduce system administration overhead. By treating all the company's storage as a single resource, disk maintenance and routine backups are easier to schedule and control. In some SANs, the disks themselves can copy data to other disks for backup without any processing overhead at the host computers.
Utilized by GSI. An open-source implementation of the Simple Authentication and Security Layer, written in C. For more information, see http://asg.web.cmu.edu/sasl.
Scale Out is the term usually applied to scaling an application or service through the use of multiple service-component instances, which typically means additional operating system instances and/or servers as well (plus clustering frameworks of various forms). This is synonymous with Horizontal Scaling. A typical example of a service that scales out is the web server tier of a multi-tier service. See also: Horizontal Scaling
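The scale-out pattern described above can be sketched as a tier of identical instances behind a round-robin dispatcher. The class and instance names here are hypothetical, and the dispatcher is deliberately simplistic; it only illustrates the idea that capacity grows by adding instances:

```python
import itertools

# Sketch of scale-out / horizontal scaling: a web tier of identical
# instances behind a round-robin dispatcher (names are illustrative).

class WebInstance:
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} served {request}"

class RoundRobinBalancer:
    """Spreads requests across instances; adding instances scales the tier out."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def handle(self, request):
        # Each request goes to the next instance in rotation.
        return next(self._cycle).handle(request)

tier = RoundRobinBalancer([WebInstance("web-1"), WebInstance("web-2")])
for req in ("GET /a", "GET /b", "GET /c"):
    print(tier.handle(req))
```

Scaling out here is simply constructing the balancer with a longer instance list, whereas scaling up (the next entry) would mean making a single instance more powerful.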
Scale Up is the term usually applied to scaling an application or service by increasing its performance and/or capacity through making more resources available to an instance of a service or service component, typically within a single instance of an operating environment and/or server. This is synonymous with Vertical Scaling. See also: Vertical Scaling
Term used to describe a job scheduler mechanism to which GRAM interfaces. It is a networked system for submitting, controlling, and monitoring the workload of batch jobs on one or more computers. Jobs or tasks are scheduled for execution at a time chosen by the subsystem, according to a configured policy and the availability of resources. Popular job schedulers include Portable Batch System (PBS), Platform LSF, and IBM LoadLeveler.
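The queueing behaviour described above — jobs waiting until the policy and free resources allow them to start — can be sketched as a minimal FIFO batch scheduler. This is an illustration of the concept only, not how PBS, LSF, or LoadLeveler are actually implemented:

```python
from collections import deque

# Minimal FIFO batch-scheduler sketch: jobs wait in a queue and are started
# only when enough CPU slots are free (illustration only; real schedulers
# apply far richer policies than first-in-first-out).

class BatchScheduler:
    def __init__(self, total_slots):
        self.free_slots = total_slots
        self.queue = deque()     # submitted jobs awaiting resources
        self.running = []        # jobs currently executing

    def submit(self, job_name, slots_needed):
        self.queue.append((job_name, slots_needed))
        self._dispatch()

    def finish(self, job_name):
        # A completing job releases its slots, which may let queued jobs run.
        for job in self.running:
            if job[0] == job_name:
                self.running.remove(job)
                self.free_slots += job[1]
                break
        self._dispatch()

    def _dispatch(self):
        # FIFO policy: start queued jobs while resources are available.
        while self.queue and self.queue[0][1] <= self.free_slots:
            job = self.queue.popleft()
            self.free_slots -= job[1]
            self.running.append(job)

sched = BatchScheduler(total_slots=4)
sched.submit("sim-A", 3)   # starts immediately (3 of 4 slots)
sched.submit("sim-B", 2)   # must wait: only 1 slot free
sched.finish("sim-A")      # freeing slots lets sim-B start
print([j[0] for j in sched.running])  # → ['sim-B']
```

In GRAM terms, the scheduler adapter (next entry) is the glue that translates a grid job request into a submission to a subsystem playing this role.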
The interface used by GRAM to communicate and interact with a job scheduler mechanism. In GT 4.x, this comprises both the Perl submission scripts and the SEG program.
The Scheduler Event Generator (SEG) is a program that uses scheduler-specific monitoring modules to generate job state change events. Depending on scheduler-specific requirements, the SEG may need to run with privileges that enable it to obtain scheduler event notifications. As such, one SEG runs per scheduler resource: for example, on a host that provides access to both PBS and fork jobs, two SEGs will be running, potentially at different privilege levels. One SEG instance exists for any particular scheduler resource instance (one for all homogeneous PBS queues, one for all fork jobs, etc.). The SEG is implemented in an executable called globus-scheduler-event-generator, located in the Globus Toolkit's libexec directory.