New Media and the Expansion of Art

Sept. 25 - Oct. 30, 2008, every Thursday 4-7 p.m.    Art Center Nabi  www.nabi.co.kr

1. / Digital Aesthetics

Speakers:

Jin Jung-kwon (Chung-Ang University)

Jung Moon-ryul (Sogang University)

Park Young-wook (Kookmin University)

Lim Tae-seung (Sungkyunkwan University)

 

This forum at Art Center Nabi was truly worthwhile.

Thanks to Professor Jung Moon-ryul, who has been steeped in aesthetics lately, I headed to Nabi with a fluttering heart as soon as class ended. More people than I expected had already arrived and filled the front and middle rows; with the seats gone I pulled up an extra chair at the back, and later some people simply sat on the floor to listen. It was striking to see everyone settling in to listen intently, eyes shining.

Apparently quite a few people working in this field had been missing a forum like this. ^^

Glancing toward the front, I could see Professor Jung, flushed with excitement as always, chatting with the other panelists. I wanted to call out "Go, Professor Jung!" but with so many people around I held back, telling myself that cheering in my heart would do...

Wandering around the SK headquarters building on my way to the restroom before the start was oddly entertaining, but what really struck me was how curious my long-running connection with Professor Jin Jung-kwon is, from undergraduate lectures all the way into graduate school. Not that he would remember that I am the student who twice earned the B+ he insists(?) is a good grade, the same student who kept attendance for his graduate seminar and once phoned him about a cancelled class. ㅎㅎ

Maybe the power of "The Secret" (you see what you think about) kicked in: coming out of the restroom I ran into Professor Jin and hurriedly greeted him with a cheerful 45-degree bow. His unchanged way of speaking was a welcome thing to hear.


The first paper was presented by Professor Jin Jung-kwon. Taking the microphone with ease, he carried on in a soft voice but at a tempo you could not afford to lose.

As I have known since undergrad, once you lose the thread of his lecture you never catch it again. Even the notes of a friend who writes everything down read like Korean that somehow cannot be understood. But if you stay reasonably focused and follow the flow of his argument, it starts to come together from about the midpoint on.

The paper surveyed the various positions that different writers have taken on the strategies of computer graphics.

After Professor Jin's paper there was a short break. Perhaps because everyone had been concentrating so hard, many seats were still empty when the ten minutes were up. Then the talk I had been waiting for, Professor Jung Moon-ryul's, began.

 

Professor Jung stepped up to the podium with a bright smile. When his slides came up on the screen, I somehow felt nervous on his behalf.

Hoping that the silent cheers I had been sending him would get through, I settled in to listen, but the microphone acted up and his voice was inaudible for the first part of the talk, which was agonizing. After a few more fretful minutes the microphone finally cooperated and we could hear him.

Moments like this make me wonder how people managed in the old days. ㅎㅎ A megaphone, maybe? Pfft..

Professor Jung's paper was titled "Computer Art: The Splendor of Digital Media, or the Ability to Build Computer Systems?"

In summary, he drew the future direction of computer art from the 1968 exhibition "Cybernetic Serendipity." The focus is on the artist rather than the art itself: instead of treating the computer merely as a medium, one should become an artist-programmer who studies and understands its mechanisms and algorithms.

When I had read the full paper earlier, sent by the professor over email, the title alone felt like a jab to the chest. I was chastened for having complacently regarded the computer as nothing but a medium and for assuming an artist need not be a programmer. It is a genuinely bracing paper, a pointed reminder to artists of the importance of scientific training and skill.

In the presentation itself, however, the word "cybernetics" was foregrounded so heavily that I worried the paper's core argument came across somewhat weakened.

Perhaps for that reason, after the talk Director Roh Soh-yeong zeroed in on cybernetics and objected that "algorithmic art has a hard time communicating with the audience; it has no aura." I thought it was an apt rebuttal, and one that will have to be worked out over time through more works and more experiments.

All the more reason, I think, why Professor Jung's claim that we must become artist-programmers is persuasive.

 

The third speaker, Professor Park Young-wook, was easygoing and humorous, and at the same time possessed of formidable intellect. He has the rare charm of putting an audience instantly at ease with candor and humor rather than formality, and so he delivered difficult, dry aesthetic material in plain language but with a firm voice.

His paper, "Digital Art and the Removal of Aesthetic Semblance: Focusing on Electronic Music," sparked a new interest in digital music, which had sat somewhat outside my attention. Electronic music turned out to be exactly the evidence he needed: an art form that overturns traditional art theory by removing aesthetic semblance.

 

The fourth speaker, Professor Lim Tae-seung, is a scholar of Eastern philosophy.
At a time when the spiritual is draining out of the digital, it was a meaningful session laying out the index between the spiritual and Eastern aesthetics. Director Roh said that, old-fashioned as it may sound, she had invited Professor Lim because the spiritual and the Eastern feel more relevant and more important these days. Fair enough. When I first saw the program and the paper topics on arriving, the pairing did look a little unbalanced, but balance is always needed, and in that respect the day's panel was well balanced. I was especially struck by his account of pyeong-gi-pyeong, the idea that as one's skill ripens into mastery, the work becomes simple again.

A discussion needs opposing viewpoints if it is to yield balanced judgments. True, in the general discussion after the papers Professors Lim and Jung nearly turned their backs on each other, but for everyone in the room that heated exchange was an invaluable aid to orienting their own practice and their own aesthetic judgments.

If you choose the computer as your material, you need skill with it, and you should fill it with the content it expresses best. Professor Jung said that content is precisely "play": if someone asks why he made a piece, his answer is simply, "Because it was fun." Isn't that exactly in line with Professor Lim's pyeong-gi-pyeong?

Professor Park's airtight, unsparing closing remark at the end of the discussion still stays with me: most artists try to make up for a shortfall in technology with ideas and content. It sounded like he was speaking straight at me.

In closing..

I found myself genuinely wanting to be free of content. ^^


Visual Intelligence: The First Decade of Computer Art (1965-1975)
Author(s): Frank Dietrich
Source: Leonardo, Vol. 19, No. 2 (1986), pp. 159-169
Published by: The MIT Press
Stable URL: http://www.jstor.org/stable/1578284
Accessed: 27/08/2008 05:16


Abstract-The author traces developments in computer art worldwide from 1965, when the first computer art exhibitions were held by scientists, through succeeding periods in which artists collaborated with scientists to create computer programs for artistic purposes. The end of the first decade of computer art was marked by economic, technological and programming advances that allowed artists more direct access to computers, high quality images and virtually unlimited color choices.

I. INTRODUCTION

The year 1984 has been synonymous with the fundamental Orwellian pessimism of a future mired in technological alienation. Yet one year later, in 1985, we are celebrating the official maturity of an art form born just 20 years ago: computer art. These adjacent dates, one fictional, the other factual, share an intimate relationship with technological development in general and the capabilities of imaging machines in particular.


II. EARLY WORK AND THE FIRST COMPUTER ART SHOWS

Computer art represents a historical breakthrough in computer applications. For the first time computers became involved in an activity that had been the exclusive domain of humans: the act of creation. The number of Ph.D.'s involved emphasized the heavily academic nature of the art form.

The first computer art exhibitions, which ran almost concurrently in 1965 in the US and Germany, were held not by artists at all, but by scientists: Bela Julesz and A. Michael Noll at the Howard Wise Gallery, New York; and Georg Nees and Frieder Nake at Galerie Niedlich, Stuttgart, Germany.

Noll and Julesz conducted their visual research at Bell Laboratories in New Jersey. The Murray Hill lab became one of the hotbeds for the development of computer graphics. Also working there were Manfred Schroeder, Ken Knowlton, Leon Harmon, Frank Sinden and E.E. Zajac, who all belonged to the first generation of 'computer artists'. These scientists were motivated mainly by research related to visual phenomena: visualization of acoustics and the foundations of binocular vision. Researchers at Bell contributed a wealth of information to spur the growth of computer graphics. Numerous computer animations were produced, mostly for educational purposes, but a few artistic experiments were also conducted. Julesz and Noll worked on the display of stereoscopic images. Noll developed the mathematics for N-dimensional projections, as well as a 3D tactile-input device. Harmon and Knowlton devised automatic digitizing methods for images, work related to a project on sampling and plotting of voice data under the direction of Manfred Schroeder. Ken Knowlton contributed several graphics languages for animation. Throughout this article I will discuss the research conducted at Bell Labs in further detail [1-5].

The German center of activity was established at the Technische Universität Stuttgart under the influence of philosopher Max Bense. Bense's main areas of research covered the history of science and the mathematics of aesthetics. He coined the terms 'artificial art' and 'generative aesthetics' in his main work, Aesthetica [6].

Advances in both computer music and poetry (or text processing) formed the context and also offered initial guidelines for computer art. Computer-generated texts were produced in Stuttgart beginning in 1960. An entire branch of text analysis relied heavily on computer-processed statistics dealing with vocabulary, length and type of sentences, etc. [7].

Electronic music studios predated computer art studios, leading a number of visual artists to seek information about computers from university music departments. Dutch artist Peter Struycken, for instance, took a course in electronic music offered by the composer G.M. Koenig at one of the few European centers, the Instituut voor Sonologie at the Rijks Universiteit at Utrecht [8].

Lejaren A. Hiller programmed the "Illiac Suite" in 1957 on the ILLIAC computer at the University of Illinois, Champaign, and composed the well-known "Computer Cantata" with Robert A. Baker in 1963 [9]. Some musicians who were using computers as a compositional tool also created graphics in an attempt to foster synergism between the arts (Iannis Xenakis, Herbert Brün, etc.) [10-19]. The filmmaker John Whitney, on the other hand, began structuring his computer animations according to harmonies of the musical scale and later called this concept Digital Harmony [20].

Only in a second phase did artists become involved and participate with scientists in three large-scale shows: "Cybernetic Serendipity," organized in London for the Institute of Contemporary Arts by Jasia Reichardt (1968), "Some More Beginnings," a show organized by Experiments in Art and Technology at the Brooklyn Museum, New York (1968), and "Software," curated by Jack Burnham at the Jewish Museum in New York (1970). The catalogs of these shows still represent some of the best overviews of emerging approaches to computers and other technologies for artistic purposes [21-24]. These shows presented the first results and publicly questioned the relationship of computers and art. They attracted many more artists to the growing field of computer art but did not succeed in making the art world in general more receptive to the new art form.


III. THE TECHNOLOGICAL ARTIST AND THE COMPUTER

When we look at the handful of early computer art aficionados, certain patterns emerge. All scientists and artists in this group belong to the same generation, born between the two World Wars, approximately 1925-1940. Their heritage is not bound by national borders; rather it is international, representing the highly industrialized countries of Europe, North America and Japan.

Initially, artists saw a very utilitarian advantage in using the computer as an accelerator for "high-speed visual thinking" [25].

Robert Mallary calls this the synergistic use of the computer in the context of man-machine interactions. He refers to the computer's "application as a tool for enhancing the on-the-spot creative power and productivity of the artist by accelerating and telescoping the creative process and by making available to its user a multitude of design options that otherwise might not occur to him" [26].

Because these artists were not interested in descriptive or elaborated painting, they could allow themselves to relate to the simple imagery generated by computers. Their interest was fueled by other capabilities of the computer, for instance its ability to allow the artist to be an omnipotent creator of a new universe with its own physical laws. Charles Csuri pointed out this far-reaching concept in an interview: "I can use a well-known physical law as a point of departure, and then, quite arbitrarily, I can change the numerical values, which essentially changes the reality. I can have light travel five times faster than the speed of light, and in a sense put myself in a position of creating my own personal science fiction" [27].

Many of the artists were constructivists. They were accustomed to arranging form and color logically and voluntarily restricting themselves to a few well-defined image elements. They tried to focus on the act of seeing and perception by stripping away any notion of content. The French Groupe de Recherche d'Art Visuel, or GRAV, was a proponent of this direction and became instrumental in the Op Art and Kinetic Art movements of the 60's [28]. Vera Molnar, a cofounder of GRAV, conceived a "Machine Imaginaire" to enable her "to produce combinations of forms never seen before, either in nature or in museums, to create unimaginable images."

She realized that the computer could permit her "to go beyond the bounds of learning, cultural heritage, environment -in short, of the social thing, which we must consider to be our second nature" [29].

Conventional aesthetics and their social-psychological connotations were seen as a hindrance to creative visual research. It was precisely the computer's nonhumanness that was understood to free art from these influences. Art critics who pointed out the cool and mechanical look of the first results of computer art did not grasp the implications of this concept.

According to the Japanese artist Hiroshi Kawano, who started producing computer art in 1959 [30], human standards of aesthetics are not applicable to computer art. Instead the works generated by a computer require from the artist (or critic) "a rigorous stoicism against beauty." For Kawano, the computer artist's only function is to teach the computer how to make art by programming an algorithm. Thus the artist/programmer has become a "meta artist," and the executing artist could be the computer itself, the "art-computer" [31].

One artist who explicitly taught the computer is Harold Cohen. He wanted to automate the process of drawing, or, to be more precise, Cohen wanted to have a computer simulate his personal style of drawing. Cohen set out in 1973 to create an expert drawing system at Stanford University's Artificial Intelligence Laboratory. The computer was programmed to model the essence of Cohen's creative strategies. The program contained a large repertoire of various forms and shapes he had been using previously in his paintings. The other main component was a 'space-finder' to establish compositional relationships heuristically between forms on one plane. Well-defined rules and random number generators guaranteed the creation of never-ending variations of drawings with a very distinctive style [32, 33].

Cohen's project AARON is an early example of a functioning harmonic symbiosis between man and machine that enables the team to achieve a top performance. Increasingly sophisticated relationships between artists and computers have been classified by Robert Mallary in his article "Computer Sculpture: Six Levels of Cybernetics." He speculates that the computer could develop into an autonomous organism, capable of self-replication. Even if the machines could never actually be "alive," Mallary suggests their potential superiority. "The computer, while not alive in any organic sense, might just as well be, if it were to be judged solely on the basis of its capabilities and performances-which are so superlative that the sculptor, like a child, can only get in the way" [34].

But this prediction sounded like pure fantasy to those trying to enable the computer to assist the artist with very simple tasks. Especially in the beginning, interfacing with computers required artists to collaborate extensively with programmers.



 Fig. 1. A. Michael Noll, Bridget Riley's Painting "Currents", 1966. An early attempt at simulating an existing painting with a computer. Much of 'op art' uses repetitive patterns that usually can be expressed very simply in mathematical terms. These waveforms were generated as parallel sinusoids with linear increasing period and drawn on a microfilm plotter. A. Michael Noll also approximated Piet Mondrian's painting Composition with Lines statistically and created a digital version with pseudorandom numbers. Xerographic reproductions of both pictures were shown to 100 subjects, and the computer-generated picture was preferred by 59.

Fig. 2. Kenneth C. Knowlton and Leon Harmon, Studies in Perception I, 1966. Knowlton and Harmon made this picture at Bell Laboratories in Murray Hill, New Jersey. It is an early example of image processing and probably the first 'computer nude'. It was exhibited in the show "The Machine" at the Museum of Modern Art in New York in 1968. Scanning a photograph, they converted the analog voltages into binary numbers, which were stored on magnetic tape. Another program assigned typographic symbols to these numbers according to halftone densities. Thus the archetype of artistic topics, the nude, is represented by a microcosmos of electronic symbols printed by a microfilm plotter.

Fig. 3. Charles A. Csuri and James Shaffer, Sine Wave Man, 1967. This picture won first prize in Computers and Automation magazine's annual computer art competition in 1967. Initially the artist drew a human face and coded a handful of selected coordinates from the line drawing. This data served as fixed points for the application of Fourier transforms. The result was a number of sine curves with different slopes, although each shares the seed points from the drawing of the face.

Fig. 4. Georg Nees, Sculpture, 1968. One of the earliest sculptures created completely under computer control. This piece was exhibited at the Biennale in Venice in 1969. Nees had a long-standing interest in the study of artificial visual complexity in connection with the chance-determination reaction. He programmed a Siemens 4004 computer to generate pseudorandom numbers, which were tightly controlled to determine width, length and depth of rectangular objects. The three-dimensional data were stored on magnetic tape and used to drive an automatic milling machine off line. The sculpture was cut from a block of wood.

Fig. 5. Frieder Nake, Matrix Multiplications, 1967. These four pictures reflect the translation of a mathematical process into an aesthetic process. A square matrix was initially filled with numbers. The matrix was multiplied successively by itself, and the resulting new matrices were translated into images of predetermined intervals. Each number was assigned a visual sign with a particular form and color. These signs were placed in a raster according to the numeric values of the matrix. The images were computed on an AEG/Telefunken TR4 programmed in ALGOL 60 and were plotted with a ZUSE Graphomat Z64. A portfolio with 12 drawings was published and sold in 1967 by Edition Hansjoerg Mayer, Stuttgart, Germany.

Fig. 6. Tony Longson, Quarter #5, 1977. As a sculptor Tony Longson got interested in the perception of space and how it is supported by perspective projection and parallax vision. Quarter #5 is the last of a series of pieces started in 1969. All are made of four sheets of Perspex mounted with small intervals on top of each other such that the viewer sees through all four layers. The complete image consists of an arrangement of dots in a rectangular grid. But this image has been decomposed randomly into four sections, so that only a subset of dots is engraved into each sheet of plastic. Viewers approaching the piece will see first an apparently chaotic distribution of dots. However, once the viewer is positioned in one of four fixed viewpoints, chaos will suddenly change into a highly structured order of dots in one of the quadrants [58].


IV. COLLABORATION OF ARTISTS AND SCIENTISTS/PROGRAMMERS/ENGINEERS

Usually artists cooperated with scientists because only scientists could provide access to computers in industrial research labs and university computer centers. But artists needed the programming expertise of scientists even more. Some collaborations prospered over many years and led to successful achievements in custom-designed programs for artistic purposes [35]. Even so, Ken Knowlton described the different attitudes of artists and programmers as a major difficulty. In Knowlton's view artists are "illogical, intuitive, and impulsive." They needed programmers who were "constrained, logical, and precise" as translators and interfaces to the computers of the 1960s [36]. But the first of a growing breed of technological artists with hybrid capabilities started to appear, too. Manfred Mohr proudly declared that he was self-taught in computer science, Edvard Zajec learned programming and today teaches it to art students, and Duane Palyka holds degrees in both fine arts and mathematics.


V. THE FIRST GRAPHICS LANGUAGES

Early attempts at graphics languages fell into one of two categories. Either they were graphics subroutines implemented in one of the common programming languages and callable from them, or they were written in machine language and set up their own syntax and command set or vocabulary.

The first graphics extensions, G1, G2 and G3 by Georg Nees, for instance, were written in ALGOL 60 and contained only commands for pen control and random number generators [38].

More elaborate was Mezei's SPARTA, a system of Fortran calls incorporating graphics primitives (line, arc, rectangle, polygon, etc.), different pen attributes (dotted, connected, etc.), and transformations (move, size, rotate).

A further development led to ARTA, an interactive language based on light-pen control. ARTA also provided subroutines for key-frame interpolation, allowing both the interactive drawing of two key frames and the description of the type of interpolation with a function [39].

A language extension similar to Fortran was GRAF, written by Jack Citroen et al. at IBM; GRAF also offered optional light-pen input [40].

Even though these graphics extensions made programming in machine code superfluous, they still required programming in Fortran or ALGOL 60. In the second group of graphics languages we find Frieder Nake's COMPART ER 56 as well as Ken Knowlton's languages, BEFLIX and EXPLOR. The names of these programming environments are indicative of either function- or machine-dependent implementation of the language. COMPART ER 56, for instance, refers to a particular computer, the Standard Elektrik ER 56, for which the language has been written. ER 56 contained three subpackages: a space organizer, a set of different random number generators, and selectors for the repertoire of graphic elements. Nake used COMPART ER 56 extensively to create more than 100 drawings [30].

BEFLIX (a corruption of Bell Flicks) was designed to produce animated movies on a Stromberg-Carlson 4020 microfilm recorder. Points within a 252-by-184 coordinate system could be controlled, each having one of eight different shades of gray. In contrast with today's frame buffers, which hold the image memory in their bit map, BEFLIX's images resided in the computer's main memory.

As an animation language it provided instructions for several motion effects as well as for camera control. Knowlton had initially hoped that artists would learn the language to program their own movies, but he came to realize that they usually wanted to create something the language could not facilitate, and they also shied away from programming. Therefore he accommodated the artists by, for example, writing special extensions to BEFLIX for Stan Vanderbeek, or creating a completely new language, EXPLOR (images from EXplicit Patterns, Local Operations and Randomness), for Lillian Schwartz [41].

None of the graphics languages mentioned received widespread use, partly because their implementation was machine dependent and also because each language was restricted in scope. The tools were useful for their inventors' goals but lacked sufficient flexibility and ease of use to accommodate the creative ideas of many different artists. BEFLIX, however, was installed in several art departments and provided the helping hand many programmers had given when collaborating individually with artists.

At Ohio State University Charles Csuri directed, over many years, the development of several graphics languages, all designed for ease of use, interactive control, and animation capabilities. One incarnation, GRASS, or GRAphics Symbiosis System, was designed by Tom DeFanti, first for a real-time vector display and later as ZGRASS, a Z80-based animation language for artists. This language has been widely used by programming artists, which indicates that it possesses sufficient generality to support different imaging strategies as well as suitable command and execution structures to adapt to artistic creation [44, 45].


VI. THE FIRST GRAPHICS SYSTEMS

Let's turn the clock back again to the 1960s when the microcomputer had not even been conceived. The computers used initially were large mainframes, soon followed by minicomputers. Such equipment cost anywhere from $100,000 to several million dollars. All these machines required air conditioning and therefore were located in separate computer rooms, which served as fairly uninhabitable 'studios'. Programs and data had to be prepared with the keypunch; then the punch cards were fed into the computer, which ran in batch mode. In general, the systems were not interactive and could produce only still images.

Pen plotters, microfilm plotters and line printers produced most of the visual output. The first animations were created by plotting all still frames of the movie sequentially on a stack of paper or microfilm. Motion could only be reviewed after these stills had been transferred to 16-mm film and projected. Only a few artists had the opportunity to use even more expensive vector displays introduced in the late 1960s by companies such as IBM or newly founded graphics manufacturers such as Evans & Sutherland, Vector General, and Adage, whose displays cost between $50,000 and $100,000. These displays featured very high addressability, up to 4000 by 4000 points, and could update coordinates fast enough to support real-time animation of wireframe models in 3-D [46].

One of the first computer art shows incorporating these interactive displays took place at the College of the Arts at Ohio State University in 1970 [47].

A more popular and cost-effective choice was the storage display tubes offered by Tektronix, starting around $10,000. But even if only a single line was to be changed, the picture had to be erased completely and redrawn.

With the exception of the microfilm plotter, all output devices were line or vector oriented and thus characterized the majority of early computer art. The microfilm plotter is something of a hybrid between a vector CRT and a raster image device [48].

The CRT beam scans the screen sequentially, turning on and off under computer control. A camera mounted on the CRT makes a time exposure during each scan. Thus the resulting image looks like a photograph taken from a raster device. Some artists used the alphanumeric characters of the line printer to produce shaded areas by overstriking one position with different characters or using the capacity of the eye to integrate these separate microimages into one larger macroimage or supersign. The German artist Klaus Basset attained extraordinarily subtle shading effects by using a simple typewriter. His work clearly relates to computer art, even if he did not employ a computer directly to control the writing process. He programmed himself, so to speak, to execute precisely designed algorithms with a mechanical drawing device [49].

Output was often taken directly from the machine and exhibited. Moreover, the signatures were sometimes plotted by the machine as part of the drawing program. Later, artists used these graphics as sketches for manually produced paintings or copy for photographically transferred serigraphs. Artists also followed tradition and signed their computer-generated work by hand. Color was often introduced only in this later phase of postproduction, or a limited range could be achieved by using different plotter pens.

Harold Cohen realized this limitation of the available graphics equipment. His approach was to delegate only the job of drawing to the computer, for which he used a plotter, a CRT, or the funny-looking 'turtle'-a remotely controlled drawing vehicle. He colored the resulting line drawings later by hand, thus mixing automatic drawing and human painting styles. He recently published a series of his computer-generated line drawings in The First Artificial Intelligence Coloring Book as an invitation to everyone to combine their creative efforts with those of the computer [50].

Methods of entering graphics were even more restricted. With hardly any interactive means of controlling the computer, artists had to rely on programs and predefined data. (The technology of light pens and data tablets had already developed, but these input devices were not widely available.) Once the data was fed into the computer, there was no more creative invention. Therefore the design process took place exclusively in the conceptualization prior to running a program. It was at least a decade before Ivan Sutherland's interactive concepts, demonstrated in 1963 with "Sketchpad" [51], resulted in a breakthrough and the ultimate proliferation of paint systems that enabled the artist to draw directly into the computer's memory. Today paint systems, sophisticated 3-D modeling software, and video input provide quick ways to create complex images, but the first generation of computer artists had to focus on logic and mathematics-in short, rather abstract methods. To some degree this restriction brought about creative concepts derived directly from computer technology itself.


VII. THE CONCEPTS

The pioneers of computer art were driven by the newness of the technology, the untouched areas wide open for inventive investigation. Because of the lack of viable commercial applications at that time, they enjoyed the rare freedom to define their own goals, guided only by personal motivation and intuition. This small group of believers had a vision, which has not yet been fulfilled. These artists felt challenged to come up with sophisticated artistic and intellectual concepts to offset the crude computer graphics machines of the mid-1960s with their lack of color, speed and interactivity. It might be fruitful to resolve with today's technology the paradox Manfred Mohr found in his work, a paradox particularly applicable to the early days of computer art: "The paradox of my generative work is that formwise it is minimalist and contentwise it is maximalist" [52].


VIII. COMBINATORICS

The computer was thought of by Nake as a "Universal Picture Generator" capable of creating every possible picture out of a combination of available picture elements and colors [30].

Obviously, a systematic application of the mathematics of combinatorics would lead to an inconceivable number of pictures, both good and bad, and would require an infinite production time in human terms, even if exactly computable. This raised the issue of preselecting a few elements that could be explored exhaustively and presented in a series or cluster of subimages as one piece. Manfred Mohr, for instance, centered his work on the cube and concisely devised successive transformations that modified an ordinary cube. The complex set of possible transformations was then plotted, and the transformations were displayed simultaneously as a single image. A series of catalogs of his work from 1973 to present exquisitely documents the consistent progression of his visual logic [52].
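As a toy illustration of this combinatorial strategy (emphatically not a reconstruction of Mohr's own program), the following Python sketch enumerates a cube's 12 edges and derives one drawing per choice of two edges to remove; the projection and all names are my own:

```python
# Toy sketch of the combinatorial idea behind the cube works described above:
# enumerate the 12 edges of a cube, then derive one drawing per choice of
# edges to remove. Illustrative only; not Mohr's actual procedure.
from itertools import combinations

# 8 cube vertices in {0,1}^3; edges join vertices differing in one coordinate
V = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
E = [(a, b) for i, a in enumerate(V) for b in V[i + 1:]
     if sum(p != q for p, q in zip(a, b)) == 1]

def project(point):
    """Crude oblique projection of a 3-D point onto the 2-D drawing plane."""
    x, y, z = point
    return (x + 0.35 * z, y + 0.35 * z)

# Removing 2 of 12 edges yields C(12,2) = 66 line drawings for one cluster.
variants = []
for removed in combinations(range(len(E)), 2):
    kept = [E[i] for i in range(len(E)) if i not in removed]
    variants.append([(project(a), project(b)) for a, b in kept])

print(len(E), len(variants))  # 12 66
```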


IX. ORDER, CHAOS, AND RANDOMNESS

Other artists chose to investigate the full range between order and chaos, employing random number generators. In this way they could create many different images from one program, introducing change with the random selection of certain parameters to define, for instance, location, type or size of a graphic element [8, 29].

Random numbers served to break the predictability of the computer. They simulated intuition in a very limited fashion and helped overcome the severe restrictions of human interaction with the computer. Random numbers could be constrained within a limited numeric range and then applied to a set of rules of aesthetic relationships. If these rules were derived from an analysis of traditional paintings, the program could simulate a number of similar designs, according to Noll [53] and Nake [30].

Or the artist could set up new rules for generating entire families of new aesthetic configurations, using random numbers to decide where and how to place graphic elements. Peter Struycken recovered the rigorous tradition of the art movement de Stijl and consciously disregarded even abstract forms by focusing exclusively on pure color. In his own words: "Form is an easier conceptual representation and repetition than color. Form can almost always be associated with a form that is already known. How easy it is to connect abstract forms to reality: this is just like a cloud, that like a snake, these like flowers. Form is then regarded as something in itself, where recognition is as important as seeing as such" [8].

To discourage even the faintest notion of content, he reduced the image in "Plons" (Dutch for Splash) to simple squares forming a coordinate system. The computer calculated propagation of color energy emanating from an arbitrary point of initial impact. The changes of color distribution were presented in numerical codes, which the artist translated into actual color paintings by hand.
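A minimal sketch of the constrained-randomness strategy described in this section, using only Python's standard random module: bounded random numbers supply variation while a fixed grid rule supplies order. The specific rules are invented here purely for illustration.

```python
# Minimal sketch of constrained randomness: a fixed grid supplies order,
# bounded random numbers supply variation. The rules are invented examples.
import random

random.seed(42)  # a fixed seed makes the "chance" reproducible

elements = []
for row in range(8):
    for col in range(8):
        jitter_x = random.uniform(-0.2, 0.2)  # location noise, constrained
        jitter_y = random.uniform(-0.2, 0.2)
        size = random.uniform(0.3, 0.9)       # size within fixed bounds
        elements.append((col + jitter_x, row + jitter_y, size))

# `elements` could now be rendered as squares by any plotting backend.
print(len(elements), elements[0])
```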


X. MATHEMATICAL FUNCTIONS

Numeric evaluations of functions could be plotted directly. Graphs of different functions could be merged, or their points could be connected. These methods relate directly to experiments with analog computing machines by Ben F. Laposky and Herbert W. Franke in the 1950s. They constructed their own imaging systems based on an arrangement of voltage-controlled oscillators. The voltages deflected the beam of oscilloscopes to produce electronic line drawings. Laposky called his images accordingly 'oscillons'. They were photographic time exposures of the CRT display [54].

Entire number fields were drawn with digital computers, which offered control superior to that of analog systems. For instance, artist/scientists would display modular relationships or particular properties such as primeness or various stages of a matrix multiplication approaching its limiting boundaries. The use of mathematics does not necessarily imply a highly geometrical result. Some scientists tried to model irregular patterns. Knowlton, for example, simulated crystal growth, and Manfred R. Schroeder visualized equations describing noise in phone lines. Both experiments relate to the mathematics of fractals so prevalent today for the modeling of natural phenomena [55].
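For instance, a modular relationship of the kind mentioned can be rendered in a few lines of Python (a print-out stand-in for the plotter output; the particular rule is my own):

```python
# Stand-in for a plotted "number field": mark cells (i, j) where the product
# i * j is divisible by a chosen modulus. The rule is an invented example of
# the modular relationships mentioned above.
ROWS, COLS, MOD = 16, 48, 7

for i in range(ROWS):
    print("".join("#" if (i * j) % MOD == 0 else "." for j in range(COLS)))
```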


XI. REPRESENTATIONAL IMAGES

Whereas the last type of image visualizes the mathematical behavior of numbers, the numbers could also represent the coordinates of a hand drawing. This data had to be meticulously entered into the computer via punch cards; then various processing methods could be applied to the image data. Leslie Mezei deformed the image progressively into complete noise [56].

Charles Csuri and James Shaffer applied Fourier transforms to a subset of the data samples and generated complex sine functions through those fixed points. The originally digitized drawing, combined with several sine waves, formed the final image of the "Sine Wave Man". The Japanese Computer Graphics Technique Group experimented with the metamorphosis of one image into a completely different one. Thus a realistic face could be completely distorted or gradually transformed into a geometric entity such as a square. The interpolation techniques they were using for the creation of a simple image became the cornerstone of animation.

This animation technique, called key-frame animation, was pioneered at the National Film Board of Canada by Nestor Burtnyk and Marceli Wein [57].

Peter Foldes used interpolation techniques successfully as the major stylistic method in his movie Hunger. Optical scanners automated the task of entering visual data into the computer and in effect revealed the potential of machine vision. These images were stored in the computer in the form of different numbers to encode different gray values, and later color. They could be printed with plotters, using a printing concept similar to that used for halftones, where dots of different sizes and densities are employed.

The artist could relate the gray values of an image to a set of arbitrary visual symbols and print out the converted images. Thus images were created like the nude by Knowlton and Harmon, which on closer inspection is seen to consist of numerous electrical symbols, or an eye whose close-up reveals letters forming the sentence "One picture is worth a thousand words" (Schroeder).
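In spirit (though certainly not in the original Bell Labs code), the conversion works like this small Python sketch, which maps gray values to typographic characters of roughly matching density:

```python
# Sketch of the gray-value-to-symbol conversion described above: each pixel's
# brightness selects a character of roughly matching visual density.
# A toy illustration of the principle, not Knowlton and Harmon's program.
PALETTE = "@%#*+=-:. "  # characters ordered dense (dark) to sparse (light)

def to_symbols(pixels):
    """pixels: rows of gray values in [0, 255]; returns printable lines."""
    step = 256 / len(PALETTE)
    return ["".join(PALETTE[int(p // step)] for p in row) for row in pixels]

# a tiny horizontal gradient as stand-in image data
image = [[int(255 * c / 7) for c in range(8)] for _ in range(4)]
print("\n".join(to_symbols(image)))
```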



Fig. 7. Herbert W. Franke, Portrait Albert Einstein, 1973. A photograph of Einstein was scanned and digitized optically. The data was stored on tickertape and displayed on a CRT using programs developed for applications in medical diagnosis. Herbert Franke generated the colors with random numbers and by smoothing contour lines. This series of portraits gradually becomes more abstract and thus fuses visually Einstein's portrait and a nuclear blob.

Fig. 8. Edvard Zajec, Prostor, 1968-69. One of a series of drawings from the Prostor program. Zajec was concerned with establishing a design system that could generate a multitude of variations. Each time the program runs, it defines a rectangle and subdivides it according to harmonic proportions. The figure formation inside this composition takes place by selecting successively a line from this set: vertical, horizontal, diagonal and sinusoidal. The parameters for length and amplitude comply with harmonic ratios. The lines connect with each other according to predetermined rules [37].

Fig. 9. Manfred Mohr, P-159A, 1973. The illusion of a three-dimensional cube is evoked by projecting a set of 12 straight lines onto the two-dimensional drawing plane. Mohr dissolves the three-dimensional by taking away edges (lines) of the cube consecutively and observes the appearance of new, two-dimensional icons. In addition, he introduces rotations and other transformations of the cube to foster visual ambiguity and instability. The dynamics of this process and its visual invention are explored systematically, and each result is drawn as part of a cluster of images representing the complete set of combinations.

Fig. 10. Colette and Charles Bangert, Landlines, 1970. This couple has been collaborating on their computer art for almost two decades. In parallel Colette has continued to draw her art by hand. The investigation of similarities and differences between 'hand work' and computer plots is the foundation of their creative concepts, and findings are fed back into new programs and more sophisticated hand drawings. This methodological approach is tightly coupled with the subject matter of all their work: landscapes. Colette reports: "The elements of both the computer work and my hand work are often repetitive, like leaves, trees, grass and other landscape elements are. There is sameness and similarity, yet everything is changing" [59].

Fig. 11. Klaus Basset, Kubus, 1974. The typewriter graphics are composed of only five signs: 'I', 'O', 'o', 'H' and '%'. By overstriking these type symbols in various combinations, Basset achieved a range of halftones from bright to dark. He used these tones to shade cubic objects, which he calculated and meticulously typed by hand. Recently, Basset started to use a computer with a line printer as an output device. He claims that the typewritten pieces are more precise.

 

Fig. 12. Sonia Landy Sheridan, Scientist's Hand at 3M, 1976. This educational masterpiece implicitly pays homage to the major utensil of early programming: the punch card. The image of the hand has been transferred onto the top line of a stack of punch cards by an electrostatic process using a prototype of 3M's VQC photocopier. Sheridan was trying to show the children of the scientists at 3M's central research labs in St. Paul, Minnesota, how the computer stores information on cards and how the image on the cards can be manipulated without even using a machine. This hand can be stretched millions of ways by merely shifting the cards [42, 43].

Fig. 13. Duane Palyka, Painting Self-Portrait, 1975. (Photo: Mike Milochek). The artist painted a self-portrait on a computer at the University of Utah Computer Science Department in 1975. In the picture, Palyka is using a simple paint program called Crayon, written in Fortran by Jim Blinn. The program ran under the DOS operating system on a DEC PDP-11/45 using the first Evans & Sutherland frame buffer.


XII. THE END OF AN ERA

The end of the first decade of computer art coincided with important changes due mainly to three technological advances:

   1. The invention of the microprocessor changed the size, price, and accessibility of computers dramatically. The computer could become a truly personal tool.

   2. Interactive systems became common in the creative process. Traditional paradigms of artistic creation such as painting and drawing, photographing and videotaping could be simulated on the computer.

   3. Raster graphics displays increased the complexity of imagery significantly. Bit-mapped image memories allowed virtually unlimited choice of color and thus supported the creation of smoothly shaded or textured three-dimensional images.

These three advances combined to foster the migration of computer technology into art schools and artists' studios as well as commercial production houses. The intimate collaboration between artists and scientists was no longer required. Supported by the emergence of user-friendly general-purpose and high-level graphics languages, artists became computer literate, or they bought software off shelves stocked by a burgeoning computer graphics industry.

Acknowledgements-The following artists and scientists provided valuable information and visuals for this article. Thank you very much Colette and Charles Bangert, Klaus Basset, Jack Burnham, Harold Cohen, Charles A. Csuri, Herbert W. Franke, Kenneth C. Knowlton, Ben F. Laposky, Tony Longson, Robert Mallary, Aaron Marcus, Manfred Mohr, Vera Molnar, Zsuzsa Molnar, Frieder Nake, Georg Nees, A. Michael Noll, Duane Palyka, Manfred R. Schroeder, Lillian Schwartz, Sonia Landy Sheridan, Peter Struycken and Edvard Zajec.


REFERENCES

1. K.C. Knowlton, "A Computer Technique for Producing Animated Movies", Proc. AFIPS Conf., No. 25, Spring 1964, pp. 67-87.

2. K.C. Knowlton, "Computer Animated Movies", Emerging Concepts in Computer Graphics, D. Secrest and J. Nievergelt, eds., Benjamin/Cummings Publishing Corp., New York-Amsterdam, 1968, pp. 343-369.

3. E.E. Zajac, "Film Animation by Computer", New Scientist, No. 29, pp. 346-349 (1966).

4. A.M. Noll, "Computers and the Visual Arts", Design and Planning, No. 2, M. Krampen and P. Seitz, eds., Hastings House Publishers, Inc., New York, 1967, pp. 65-79.

5. A.M. Noll, "The Digital Computer as a Creative Medium", IEEE Spectrum 4, 89-95 (1967).

6. M. Bense, Aesthetica. Einführung in die neue Ästhetik, Agis Verlag, Baden-Baden, 1965.

7. W. Fucks, Nach allen Regeln der Kunst, Deutsche Verlags-Anstalt, Stuttgart, 1968.

8. P. Struycken, Structuur-Elementen 1969-1980, catalog, Museum Boymans-van Beuningen, Rotterdam, 1980, translated from Dutch by Bert Speelpennig, p. 8.

9. L.A. Hiller and L.M. Isaacson, Experimental Music, McGraw-Hill Book Co., New York, 1959.

10. J. Burnham, Beyond Modern Sculpture. George Braziller, Inc., New York, 1968.

11. D. Davis, Art and the Future. Praeger Publishers, New York, 1973.

12. H.W. Franke, Computer Graphics, Computer Art. Phaidon, New York, 1971.

13. S. Kranz, Science and Technology in the Arts. Van Nostrand-Reinhold, New York, 1974.

14. F. Malina, ed., Kinetic Art: Theory and Practice. Selections from the Journal Leonardo. Dover Publications, New York, 1974.

15. F. Malina, ed., Visual Art, Mathematics and Computers. Pergamon Press, Elmsford, N.Y., 1979.

16. T. Nelson, Dream Machines-Computer Lib. The Distributors, South Bend, Ind., 1974.

17. J. Reichardt, ed., Cybernetics, Art and Ideas. New York Graphic Society, Greenwich, Conn., 1971.

18. R. Russett and C. Starr, Experimental Animation. Van Nostrand-Reinhold, New York, 1976.

19. G. Youngblood, Expanded Cinema. E.P. Dutton & Co., New York, 1970.

20. J. Whitney, Digital Harmony. BYTE Pubs., Peterborough, N.H., 1980.

21. J. Reichardt, ed., "Cybernetic Serendipity: The Computer and the Arts", Studio International Special Issue, London and New York, 1968.

22. B. Klüver, J. Martin, and R. Rauschenberg, eds., Some More Beginnings: An Exhibition of Submitted Works Involving Technical Materials and Processes. Brooklyn Museum and the Museum of Modern Art, Experiments in Art and Technology, New York, 1968.

23. J. Burnham, ed., Software: Information Technology: Its New Meaning for Art. The Jewish Museum, New York, 1970.

24. F. Popper et al., eds., Electra: L'électricité et l'électronique dans l'art au XXe siècle. Musée d'Art Moderne de la Ville de Paris, Paris, 1983.

25. M. Mohr, "Artist's Statement", The Computer and its Influence on Art and Design, J.R. Lipsky, ed., catalog, Sheldon Memorial Art Gallery, Univ. of Nebraska, Lincoln, Spring 1983.

26. R. Mallary, "Statement", Computer Art: Hardware and Software Vs. Aesthetics, Coordinators: Charles and Colette Bangert, Proc. Seventh National Sculpture Conf., Univ. of Kansas Press, Lawrence, Apr. 1972, pp. 184-189.

27. A. Efland, "An Interview with Charles Csuri", Reichardt, Cybernetic Serendipity, p. 84.

28. Groupe de Recherche d'Art Visuel, Proposition sur le Mouvement. Catalog, Galerie Denise Rene, Paris, Jan. 1961.

29. V. Molnar, "Artist's Statement", Page #43, pp. 26-27 (1980).

30. F. Nake, Ästhetik als Informationsverarbeitung, Springer Verlag, Wien-New York, 1974.

31. H. Kawano, "What Is Computer Art?" Artist and Computer, R. Leavitt, ed., Creative Computing, Morristown, N.J., 1976, pp. 112-113.

32. H. Cohen, What Is An Image? IJCAI 6, pp. 1028-1057.

33. H. Cohen, "How to Make a Drawing", Science Colloquium. Nat'l Bur. of Stds., Wash. D.C., Dec. 17, 1982.

34. R. Mallary, "Computer Sculpture: Six Levels of Cybernetics", Artforum, May 1969, pp. 29-35.

35. Colette Bangert and Charles Bangert, "Experiences in Making Drawings by Computer and by Hand", Leonardo 7, 289-296 (1974).

36. K.C. Knowlton, "Statement", Computer Art: Hardware and Software Vs. Aesthetics, Coordinators: Charles and Colette Bangert, Proc. Seventh National Sculpture Conf., Univ. of Kansas Press, Lawrence, Apr. 1972, pp. 183-184.

37. E. Zajec, "Computer Art: The Binary System for Producing Geometrical Nonfigurative Pictures", Leonardo 11, 13-21 (1978).

38. G. Nees, Generative Computergraphik, Siemens AG, Berlin-Munich, 1969.

39. L. Mezei, "SPARTA: A Procedure Oriented Programming Language for the Manipulation of Arbitrary Line Drawings", Information Processing 68, Proc. IFIP Congress 69, North-Holland, Amsterdam, pp. 597-604.

40. J.P. Citroen and J.H. Whitney, "CAMP-Computer Assisted Movie Production", AFIPS Conf. Proc. 33, 1299-1305.

41. K.C. Knowlton, "Collaborations with Artists-A Programmer's Reflection", F. Nake and A. Rosenfeld, eds., Graphic Languages, North-Holland, Amsterdam, 1972, pp. 399-416.

42. Sonia Landy Sheridan, Energized Artscience. Catalog, Museum of Science and Industry, Chicago, 1978.

43. D. Kirkpatrick, "Sonia Landy Sheridan", Woman's Art J. 1, 56-59 (1980).

44. T. DeFanti, "The Digital Component of the Circle Graphics Habitat", Proc. NCC 195-203 (1976).

45. T. DeFanti, "Language Control for Easy Electronic Visualization", BYTE 90-106 (1980).

46. I.E. Sutherland, "Computer Displays", Scientific American June 1970.

47. C. Csuri, ed., Interactive Sound and Visual Systems. Catalog, College of the Arts, Ohio State University, Columbus, 1970.

48. R. Baecker, "Digital Video Displays and Dynamic Graphics", Computer Graphics (Proc. SIGGRAPH 79) 13, 48-56 (1979).

49. K. Basset, "Schreibmaschinengrafik", H.W. Franke and G. Jäger, eds., Apparative Kunst. Vom Kaleidoskop zum Computer, DuMont Schauberg, Cologne, 1973, pp. 168-171.

50. H. Cohen et al., eds., The First Artificial Intelligence Coloring Book. Kaufmann, Menlo Park, Calif., 1984.


Voltage divider

From Wikipedia, the free encyclopedia

Figure 1: Voltage divider

In electronics, a voltage divider (also known as a potential divider) is a simple linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division refers to the partitioning of a voltage among the components of the divider.

The formula governing a voltage divider is similar to that for a current divider, but the ratio describing voltage division places the selected impedance in the numerator, unlike current division where it is the unselected components that enter the numerator.

A simple example of a voltage divider consists of two resistors in series or a potentiometer. It is commonly used to create a reference voltage, and may also be used as a signal attenuator at low frequencies.


General case

A voltage divider referenced to ground is created by connecting two impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2 and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors.

Applying Ohm's Law, the relationship between the input voltage, Vin, and the output voltage, Vout, can be found:


V_\mathrm{out} =  \frac{Z_2}{Z_1+Z_2} \cdot V_\mathrm{in}

Proof:

V_\mathrm{in} = I\cdot(Z_1+Z_2)
V_\mathrm{out} = I\cdot Z_2
I = V_\mathrm{in}\cdot\frac {1}{Z_1+Z_2}
V_\mathrm{out} = V_\mathrm{in} \cdot\frac {Z_2}{Z_1+Z_2}

The transfer function (also known as the divider's voltage ratio) of this circuit is simply:


H = \frac {V_{out}}{V_{in}} = \frac{Z_2}{Z_1+Z_2}

In general this transfer function is a complex, rational function of frequency.
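As a quick numerical check, the divider equation can be evaluated directly for real or complex impedances in Python (helper and variable names are my own):

```python
# Direct evaluation of the divider equation above; works for resistive
# (real) or reactive (complex) impedances alike. Names are my own.
def voltage_divider(v_in: complex, z1: complex, z2: complex) -> complex:
    """Return V_out = V_in * Z2 / (Z1 + Z2)."""
    return v_in * z2 / (z1 + z2)

# Example: 9 V across Z1 = 10 kOhm and Z2 = 20 kOhm gives 6 V at the output.
print(voltage_divider(9, 10e3, 20e3))  # 6.0
```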

Resistive divider

Figure 2: Simple resistive voltage divider

A resistive divider is a special case where both impedances, Z1 and Z2, are purely resistive (Figure 2).

Substituting Z1 = R1 and Z2 = R2 into the previous expression gives:


V_\mathrm{out} =  \frac{R_2}{R_1+R_2} \cdot V_\mathrm{in}

As in the general case, R1 and R2 may be any combination of series/parallel resistors.

Examples

Resistive divider

As a simple example, if R1 = R2 then


V_\mathrm{out} = \frac{1}{2} \cdot V_\mathrm{in}

As a more specific and/or practical example, if Vout=6V and Vin=9V (both commonly used voltages), then:


\frac{V_\mathrm{out}}{V_\mathrm{in}} = \frac{R_2}{R_1+R_2} = \frac{6}{9} = \frac{2}{3}

and by solving using algebra, R2 must be twice the value of R1.

To solve for R1:


R_1 = \frac{R_2 \cdot V_\mathrm{in}}{V_\mathrm{out}} - R_2

To solve for R2:


R_2 = \frac{R_1}{\frac{V_\mathrm{in}}{V_\mathrm{out}}-1}

Any ratio between 0 and 1 is possible. That is, using resistors alone it is not possible to either reverse the voltage or increase Vout above Vin.
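Applying the solve-for-R1 formula to the 6 V from 9 V example above (the function name is my own):

```python
# Solving the 9 V -> 6 V example above with the solve-for-R1 formula.
def r1_for(r2: float, v_in: float, v_out: float) -> float:
    """R1 = R2 * V_in / V_out - R2, rearranged from the divider equation."""
    return r2 * v_in / v_out - r2

# Choosing R2 = 20 kOhm forces R1 = 10 kOhm, i.e. R2 is twice R1, as derived.
print(r1_for(r2=20e3, v_in=9.0, v_out=6.0))  # 10000.0
```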

Low-pass RC filter

Figure 3: Resistor/capacitor voltage divider

Consider a divider consisting of a resistor and capacitor as shown in Figure 3.

Comparing with the general case, we see Z1 = R and Z2 is the impedance of the capacitor, given by

 Z_2 = jX_{\mathrm{C}} =\frac {1} {j \omega C}  \ ,

where XC is the reactance of the capacitor, C is the capacitance of the capacitor, j is the imaginary unit, and ω (omega) is the radian frequency of the input voltage.

This divider will then have the voltage ratio:


{V_\mathrm{out} \over V_\mathrm{in}} =  {Z_\mathrm{2} \over Z_\mathrm{1} + Z_\mathrm{2}} = {{1 \over j \omega C} \over {1 \over j \omega C} + R} = {1 \over 1 +  j \omega \  R C}
.

The product τ (tau) = RC is called the time constant of the circuit.

The ratio then depends on frequency, in this case decreasing as frequency increases. This circuit is, in fact, a basic (first-order) lowpass filter. The ratio contains an imaginary number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is:

 \left| \frac {V_\mathrm{out}} {V_\mathrm{in}} \right|  = \frac {1} {\sqrt { 1 + ( \omega R C )^2 }  } \ .
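A numeric sanity check of this magnitude formula (the component values below are arbitrary): at the corner frequency ω = 1/(RC) the ratio should equal 1/√2 ≈ 0.707.

```python
# Check of the low-pass magnitude formula above at the corner frequency,
# where |Vout/Vin| should be 1/sqrt(2). Component values are arbitrary.
import math

def lowpass_ratio(omega: float, r: float, c: float) -> float:
    """|Vout/Vin| = 1 / sqrt(1 + (omega * R * C)^2)."""
    return 1.0 / math.sqrt(1.0 + (omega * r * c) ** 2)

R, C = 1e3, 1e-6          # 1 kOhm and 1 uF give RC = 1 ms
omega_c = 1.0 / (R * C)   # corner (cutoff) angular frequency
print(lowpass_ratio(omega_c, R, C))  # ~0.7071
```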

Inductive divider

Inductive dividers split DC input according to resistive divider rules above.

Inductive dividers split AC input according to inductance:

V_\mathrm{out} = \frac{L_2}{L_1+L_2} \cdot V_\mathrm{in}

The above equation is for ideal conditions. In the real world the amount of mutual inductance will alter the results.

Capacitive divider

Capacitive dividers do not pass DC input.

For an AC input a simple capacitive equation is:

V_\mathrm{out} = \frac{C_1}{C_1+C_2} \cdot V_\mathrm{in}

Capacitive dividers are limited in current by the capacitance of the elements used.

Loading effect

The voltage output of a voltage divider is not fixed but varies according to the load. To obtain a reasonably stable output voltage the output current should be a small fraction of the input current. The drawback of this is that most of the input current is wasted as heat in the resistors.

The following example describes the effect when a voltage divider is used to drive an amplifier:

Figure 3: A simple voltage amplifier (gray outline) demonstrating input loading (blue outline) and output loading (green outline)

The gain of an amplifier generally depends on its source and load terminations, so-called loading effects that reduce the gain. The analysis of the amplifier itself is conveniently treated separately using idealized drivers and loads, and then supplemented by the use of voltage and current division to include the loading effects of real sources and loads.[1] The choice of idealized driver and idealized load depends upon whether current or voltage is the input/output variable for the amplifier at hand, as described next. For more detail on types of amplifier based upon input/output variables, see classification based on input and output variables.

In terms of sources, amplifiers with voltage input (voltage and transconductance amplifiers) typically are characterized using ideal zero-impedance voltage sources. In terms of terminations, amplifiers with voltage output (voltage and transresistance amplifiers) typically are characterized in terms of an open circuit output condition.

Similarly, amplifiers with current input (current and transresistance amplifiers) are characterized using ideal infinite impedance current sources, while amplifiers with current output (current and transconductance amplifiers) are characterized by a short-circuit output condition.

As stated above, when any of these amplifiers is driven by a non-ideal source, and/or terminated by a finite, non-zero load, the effective gain is lowered due to the loading effect at the input and/or the output. Figure 3 illustrates loading by voltage division at both input and output for a simple voltage amplifier. (A current amplifier example is found in the article on current division.) For any of the four types of amplifier (current, voltage, transconductance or transresistance), these loading effects can be understood as a result of voltage division and/or current division, as described next.

Input loading

A general voltage source can be represented by a Thévenin equivalent circuit with Thévenin series impedance RS. For a Thévenin driver, the input voltage vi is reduced from vS by voltage division to a value

 v_i = v_S \frac {R_{in}} {R_S + R_{in}} ,

where Rin is the amplifier input resistance, and the overall gain is reduced below the idealized gain by the same voltage division factor.

In the same manner, the ideal input current for an ideal driver ii is realized only for an infinite-resistance current driver. For a Norton driver with current iS and source impedance RS, the input current ii is reduced from iS by current division to a value

 i_i = i_S \frac {R_{S}} {R_S + R_{in}} ,

where Rin is the amplifier input resistance, and the overall gain is reduced below the gain estimated using an ideal driver by the same current division factor.

More generally, complex frequency-dependent impedances can be used instead of the driver and amplifier resistances.

Output loading

For a finite load RL, the output voltage is reduced by voltage division by the factor RL / ( RL + Rout ), where Rout is the amplifier output resistance. Likewise, as the term short-circuit implies, the output current delivered to a load RL is reduced by current division by the factor Rout / ( RL + Rout ). The overall gain is reduced below the gain estimated using an ideal load by the same current division factor.

More generally, complex frequency-dependent impedances can be used instead of the load and amplifier resistances.

Loaded gain - voltage amplifier case

Including both the input and output voltage division factors for the voltage amplifier of Figure 3, the ideal voltage gain Av realized with an ideal driver and an open-circuit load is reduced to the loaded gain Aloaded:

$$ A_{loaded} = \frac{v_L}{v_S} = \frac{R_{in}}{R_S + R_{in}} \; \frac{R_L}{R_{out} + R_L} \; A_v \ . $$

The resistor ratios in the above expression are called the loading factors.
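The loading factors are easy to verify numerically; below is a minimal Python sketch (my own, continuing the assumed values from the input-loading example):

    def loaded_gain(a_v, r_s, r_in, r_out, r_l):
        """A_loaded = [R_in/(R_S+R_in)] * [R_L/(R_out+R_L)] * A_v."""
        return (r_in / (r_s + r_in)) * (r_l / (r_out + r_l)) * a_v

    # An ideal gain of 100 with R_S = 1k, R_in = 9k, R_out = 100, R_L = 900:
    print(loaded_gain(100, 1e3, 9e3, 100, 900))  # 100 * 0.9 * 0.9 = 81.0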

Figure 4: G-parameter voltage amplifier two-port; feedback provided by dependent current-controlled current source of gain β A/A

Unilateral versus bilateral amplifiers

Figure 3 and the associated discussion refer to a unilateral amplifier. In the more general case where the amplifier is represented by a two-port, the input resistance of the amplifier depends on its load, and the output resistance on the source impedance. The loading factors in these cases must employ the true amplifier impedances including these bilateral effects. For example, taking the unilateral voltage amplifier of Figure 3, the corresponding bilateral two-port network is shown in Figure 4 based upon g-parameters.[2] Carrying out the analysis for this circuit, the voltage gain with feedback Afb is found to be

$$ A_{fb} = \frac{v_L}{v_S} = \frac{A_{loaded}}{1 + \beta \, (R_S / R_L) \, A_{loaded}} \ . $$

That is, the ideal voltage gain Av is reduced not only by the loading factors but, due to the bilateral nature of the two-port, by the additional factor[3] ( 1 + β (RS / RL ) Aloaded ), which is typical of negative feedback amplifier circuits. The factor β (RS / RL ) is the voltage feedback provided by the current feedback source of current gain β A/A. For instance, for an ideal voltage source with RS = 0 Ω, the current feedback has no influence, and for RL = ∞ Ω, there is zero load current, again disabling the feedback.
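The two limiting cases in the last sentence can be checked with a short Python sketch (my own illustration; the numbers continue the assumed example above):

    def feedback_gain(a_loaded, beta, r_s, r_l):
        """A_fb = A_loaded / (1 + beta*(R_S/R_L)*A_loaded) for the Figure 4 two-port."""
        return a_loaded / (1 + beta * (r_s / r_l) * a_loaded)

    print(feedback_gain(81, 0.5, 0.0, 900))           # 81.0: R_S = 0 disables the feedback
    print(feedback_gain(81, 0.5, 1e3, float("inf")))  # 81.0: R_L = infinity also disables it
    print(feedback_gain(81, 0.5, 1e3, 900))           # ~1.76: strong desensitization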

Applications

Reference voltage

Voltage dividers are often used to produce stable reference voltages. The term reference voltage implies that little or no current is drawn from the divider output node by an attached load. Thus, using the divider as a reference requires a load device with a high input impedance, so that the divider is not loaded and its output voltage is not disturbed. A simple way of avoiding loading (in low-power applications) is to feed the reference voltage into the non-inverting input of an op-amp buffer.

Voltage source

While voltage dividers may be used to produce precise reference voltages (that is, when no current is drawn from the reference node), they make poor voltage sources (that is, when current is drawn from the reference node). The reason for the poor source behavior is that the current drawn by the load passes through resistor R1 but not through R2, causing the voltage drop across R1 to change with the load current and thereby changing the output voltage.

In terms of the above equations, if current flows into a load resistance RL (attached at the output node where the voltage is Vout), that load resistance must be considered in parallel with R2 to determine the voltage at Vout. In this case, the voltage at Vout is calculated as follows:

$$ V_{out} = \frac{R_2 \parallel R_L}{R_1 + R_2 \parallel R_L} \cdot V_{in} = \frac{R_2}{R_1 \left( 1 + \frac{R_2}{R_L} \right) + R_2} \cdot V_{in} \ , $$

where RL is the load resistance in parallel with R2. From this result it is clear that Vout is decreased by RL unless R2 ∥ RL ≈ R2, that is, unless RL ≫ R2.

In other words, a voltage divider can serve as a voltage source for a high-impedance load, as long as R2 is very small compared to the load. The price of this technique is considerable power dissipation in the divider itself.
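To make the loading effect concrete, here is a minimal Python sketch (my own; the component values are assumptions) of the loaded-divider formula above:

    def divider_out(v_in, r1, r2, r_l=float("inf")):
        """Divider output with a load R_L in parallel with R2 (R_L = inf means unloaded)."""
        r2_eff = r2 if r_l == float("inf") else r2 * r_l / (r2 + r_l)
        return v_in * r2_eff / (r1 + r2_eff)

    print(divider_out(10, 1e3, 1e3))         # 5.0 V unloaded
    print(divider_out(10, 1e3, 1e3, 100e3))  # ~4.98 V: R_L >> R2, little droop
    print(divider_out(10, 1e3, 1e3, 1e3))    # ~3.33 V: heavy loading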

A voltage divider is commonly used to set the DC bias of a common emitter amplifier, where the current drawn from the divider is the relatively low base current of the transistor.

References

  1. ^ A. S. Sedra and K. C. Smith (2004). Microelectronic Circuits, Fifth Edition. New York: Oxford University Press, §1.5, pp. 23–31. ISBN 0-19-514251-9.
  2. ^ The g-parameter two-port is the only one of the standard four choices that has a voltage-controlled voltage source on the output side.
  3. ^ Often called the improvement factor or the desensitivity factor.

Relay

From Wikipedia, the free encyclopedia

Automotive style miniature relay

A relay is an electrical switch that opens and closes under the control of another electrical circuit. In the original form, the switch is operated by an electromagnet to open or close one or many sets of contacts. It was invented by Joseph Henry in 1835. Because a relay is able to control an output circuit of higher power than the input circuit, it can be considered to be, in a broad sense, a form of an electrical amplifier.


Operation

Small relay as used in electronics

When a current flows through the coil, the resulting magnetic field attracts an armature that is mechanically linked to a moving contact. The movement either makes or breaks a connection with a fixed contact. When the current to the coil is switched off, the armature is returned to its relaxed position by a force approximately half as strong as the magnetic force; usually this force is provided by a spring, but gravity is also commonly used in industrial motor starters. Most relays are manufactured to operate quickly: in a low-voltage application this reduces noise, while in a high-voltage or high-current application it reduces arcing.

If the coil is energized with DC, a diode is frequently installed across the coil to dissipate the energy from the collapsing magnetic field at deactivation, which would otherwise generate a voltage spike and might damage circuit components. Some automotive relays already include that diode inside the relay case. Alternatively, a contact protection network, consisting of a capacitor and resistor in series, may absorb the surge. If the coil is designed to be energized with AC, a small copper ring can be crimped to the end of the solenoid; this "shading ring" creates a small out-of-phase current, which increases the minimum pull on the armature during the AC cycle.[1]
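To see why the diode (or RC snubber) matters, a rough back-of-the-envelope estimate helps; the Python sketch below is my own illustration and every component value is an assumption:

    L_COIL = 0.1    # coil inductance in henries (assumed)
    I_COIL = 0.05   # steady-state coil current in amperes (assumed)
    T_OFF = 1e-6    # time in which the switch interrupts the current (assumed)

    energy_j = 0.5 * L_COIL * I_COIL ** 2   # stored energy E = (1/2) L I^2
    spike_v = L_COIL * I_COIL / T_OFF       # crude kickback estimate, V = L di/dt
    print(f"{energy_j * 1e3:.3f} mJ stored, ~{spike_v:.0f} V spike without a clamp")

Even these modest assumed values suggest a kilovolt-scale spike, which is why the clamp is routine.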

By analogy with the functions of the original electromagnetic device, a solid-state relay is made with a thyristor or other solid-state switching device. To achieve electrical isolation, an optocoupler can be used: a light-emitting diode (LED) coupled with a phototransistor.

Types of relay

Latching relay

Latching Relay (unboxed)

A latching relay has two relaxed states (it is bistable); these are also called 'keep' or 'stay' relays. When the current is switched off, the relay remains in its last state. This is achieved with a solenoid operating a ratchet-and-cam mechanism, by having two opposing coils with an over-center spring or permanent magnet that holds the armature and contacts in position while the coil is relaxed, or with a remanent core. In the ratchet-and-cam version, the first pulse to the coil turns the relay on and the second pulse turns it off. In the two-coil version, a pulse to one coil turns the relay on and a pulse to the opposite coil turns it off. This type of relay has the advantage that it consumes power only for an instant while it is being switched, and it retains its last setting across a power outage.

Reed relay

A reed relay has a set of contacts inside a vacuum or inert gas filled glass tube, which protects the contacts against atmospheric corrosion. The contacts are closed by a magnetic field generated when current passes through a coil around the glass tube. Reed relays are capable of faster switching speeds than larger types of relays, but have low switch current and voltage ratings. See also reed switch.

Mercury-wetted relay

A mercury-wetted reed relay is a form of reed relay in which the contacts are wetted with mercury. Such relays are used to switch low-voltage signals (one volt or less) because of their low contact resistance, or for high-speed counting and timing applications where the mercury eliminates contact bounce. Mercury-wetted relays are position-sensitive and must be mounted vertically to work properly. Because of the toxicity and expense of liquid mercury, these relays are rarely specified for new equipment. See also mercury switch.

Polarized relay

A polarized relay placed the armature between the poles of a permanent magnet to increase sensitivity. Polarized relays were used in mid-20th-century telephone exchanges to detect faint pulses and correct telegraphic distortion. The poles were on screws, so a technician could first adjust them for maximum sensitivity and then apply a bias spring to set the critical current that would operate the relay.

Machine tool relay

A machine tool relay is a type standardized for industrial control of machine tools, transfer machines, and other sequential control. They are characterized by a large number of contacts (sometimes extendable in the field) which are easily converted from normally-open to normally-closed status, easily replaceable coils, and a form factor that allows many relays to be installed compactly in a control panel. Although such relays were once the backbone of automation in industries such as automobile assembly, the programmable logic controller (PLC) has mostly displaced the machine tool relay from sequential control applications.

Contactor relay

A contactor is a very heavy-duty relay used for switching electric motors and lighting loads. Because of the high currents, the contacts are made with pure silver; the unavoidable arcing causes the contacts to oxidize, but silver oxide is still a good conductor. Such devices are often used for motor starters. A motor starter is a contactor with overload protection devices attached. The overload sensing devices are a form of heat-operated relay in which a coil heats a bimetal strip, or a solder pot melts, releasing a spring to operate auxiliary contacts. These auxiliary contacts are in series with the coil: if the overload senses excess current in the load, the coil is de-energized. Contactor relays can be extremely loud to operate, making them unfit for use where noise is a chief concern.

Solid state contactor relay

25 amp or 40 amp solid state contactors

A solid state contactor is a very heavy-duty solid state relay, including the necessary heat sink, used for switching electric heaters, small electric motors, and lighting loads where frequent on/off cycles are required. There are no moving parts to wear out and there is no contact bounce due to vibration. They are activated by AC or DC control signals from programmable logic controllers (PLCs), PCs, transistor-transistor logic (TTL) sources, or other microprocessor controls.

Buchholz relay

A Buchholz relay is a safety device sensing the accumulation of gas in large oil-filled transformers, which will alarm on slow accumulation of gas or shut down the transformer if gas is produced rapidly in the transformer oil.

Forced-guided contacts relay

A forced-guided contacts relay has relay contacts that are mechanically linked together, so that when the relay coil is energized or de-energized, all of the linked contacts move together. If one set of contacts in the relay becomes immobilized, no other contact of the same relay will be able to move. The function of forced-guided contacts is to enable the safety circuit to check the status of the relay. Forced-guided contacts are also known as "positive-guided contacts", "captive contacts", "locked contacts", or "safety relays".

A solid state relay, which has no moving parts

Solid-state relay

A solid state relay (SSR) is a solid state electronic component that provides a similar function to an electromechanical relay but does not have any moving components, increasing long-term reliability. With early SSRs, the tradeoff came from the fact that every transistor has a small voltage drop across it; this collective voltage drop limited the amount of current a given SSR could handle. As transistors improved, higher-current SSRs, able to handle 100 to 1,200 amperes, have become commercially available. Compared to electromagnetic relays, however, SSRs may be falsely triggered by transients.

Overload protection relay

One type of electric motor overload protection relay is operated by a heating element in series with the electric motor. The heat generated by the motor current operates a bimetal strip or melts solder, releasing a spring to operate contacts. Where the overload relay is exposed to the same environment as the motor, a useful though crude compensation for motor ambient temperature is provided.

Pole & Throw

Circuit symbols of relays. "C" denotes the common terminal in SPDT and DPDT types.
The diagram on the package of a DPDT AC coil relay

Since relays are switches, the terminology applied to switches is also applied to relays. A relay will switch one or more poles, each of whose contacts can be thrown by energizing the coil in one of three ways:

  • Normally-open (NO) contacts connect the circuit when the relay is activated; the circuit is disconnected when the relay is inactive. It is also called a Form A contact or "make" contact.
  • Normally-closed (NC) contacts disconnect the circuit when the relay is activated; the circuit is connected when the relay is inactive. It is also called a Form B contact or "break" contact.
  • Change-over (CO), or double-throw (DT), contacts control two circuits: one normally-open contact and one normally-closed contact with a common terminal. It is also called a Form C contact or "transfer" contact ("break before make"). If this type of contact utilizes a "make before break" functionality, then it is called a Form D contact.

The following designations are commonly encountered:

  • SPST - Single Pole Single Throw. These have two terminals which can be connected or disconnected. Including two for the coil, such a relay has four terminals in total. It is ambiguous whether the pole is normally open or normally closed. The terminology "SPNO" and "SPNC" is sometimes used to resolve the ambiguity.
  • SPDT - Single Pole Double Throw. A common terminal connects to either of two others. Including two for the coil, such a relay has five terminals in total.
  • DPST - Double Pole Single Throw. These have two pairs of terminals. Equivalent to two SPST switches or relays actuated by a single coil. Including two for the coil, such a relay has six terminals in total. The poles may be Form A or Form B (or one of each).
  • DPDT - Double Pole Double Throw. These have two rows of change-over terminals. Equivalent to two SPDT switches or relays actuated by a single coil. Such a relay has eight terminals, including the coil.

The "S" or "D" may be replaced with a number, indicating multiple switches connected to a single actuator. For example 4PDT indicates a four pole double throw relay (with 14 terminals).

Applications

Relays are used:

  • to control a high-voltage circuit with a low-voltage signal, as in some types of modems or audio amplifiers,
  • to control a high-current circuit with a low-current signal, as in the starter solenoid of an automobile,
  • to detect and isolate faults on transmission and distribution lines by opening and closing circuit breakers (protection relays),
A DPDT AC coil relay with "ice cube" packaging
  • to isolate the controlling circuit from the controlled circuit when the two are at different potentials, for example when controlling a mains-powered device from a low-voltage switch. The latter is often applied to control office lighting, since the low-voltage wires are easily installed in partitions, which may often be moved as needs change. They may also be controlled by room occupancy detectors in an effort to conserve energy,
  • to perform logic functions. For example, the Boolean AND function is realised by connecting NO relay contacts in series, and the OR function by connecting NO contacts in parallel. The change-over or Form C contacts perform the XOR (exclusive or) function. Similar functions for NAND and NOR are accomplished using NC contacts. The Ladder programming language is often used for designing relay logic networks. (A small simulation of these contact networks follows this list.)
    • Early computing. Before vacuum tubes and transistors, relays were used as logical elements in digital computers. See ARRA (computer), Harvard Mark II, Zuse Z2, and Zuse Z3.
    • Safety-critical logic. Because relays are much more resistant than semiconductors to nuclear radiation, they are widely used in safety-critical logic, such as the control panels of radioactive waste-handling machinery.
  • to perform time delay functions. Relays can be modified to delay opening or delay closing a set of contacts. A very short (a fraction of a second) delay would use a copper disk between the armature and moving blade assembly. Current flowing in the disk maintains magnetic field for a short time, lengthening release time. For a slightly longer (up to a minute) delay, a dashpot is used. A dashpot is a piston filled with fluid that is allowed to escape slowly. The time period can be varied by increasing or decreasing the flow rate. For longer time periods, a mechanical clockwork timer is installed.
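As promised in the logic-functions item above, here is a minimal Python sketch (my own) of contact networks as Boolean functions:

    def series(*contacts: bool) -> bool:
        """Current flows only if every contact in the chain is closed: AND."""
        return all(contacts)

    def parallel(*contacts: bool) -> bool:
        """Current flows if any parallel branch is closed: OR."""
        return any(contacts)

    def nc(coil: bool) -> bool:
        """A normally-closed contact inverts its coil state: NOT."""
        return not coil

    a, b = True, False
    print(series(a, b))            # AND  -> False
    print(parallel(a, b))          # OR   -> True
    print(parallel(nc(a), nc(b)))  # NAND -> True  (NC contacts in parallel)
    print(series(nc(a), nc(b)))    # NOR  -> False (NC contacts in series)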

Relay application considerations

A large relay with two coils and many sets of contacts, used in an old telephone switching system.
Several 30-contact relays in "Connector" circuits in mid 20th century 1XB switch and 5XB switch telephone exchanges; cover removed on one

Selection of an appropriate relay for a particular application requires evaluation of many different factors:

  • Number and type of contacts - normally open, normally closed, (double-throw)
  • Contact arrangement - change-over contacts come in two styles, "make before break" and "break before make". Old-style telephone switches required make-before-break so that the connection was not dropped while dialing the number; railroads still use them to control railroad crossings.
  • Rating of contacts - small relays switch a few amperes, large contactors are rated for up to 3000 amperes, alternating or direct current
  • Voltage rating of contacts - typical control relays rated 300 VAC or 600 VAC, automotive types to 50 VDC, special high-voltage relays to about 15,000 V
  • Coil voltage - machine-tool relays usually 24 VAC or 120 VAC, relays for switchgear may have 125 V or 250 VDC coils, "sensitive" relays operate on a few milliamperes
  • Package/enclosure - open, touch-safe, double-voltage for isolation between circuits, explosion proof, outdoor, oil-splash resistant
  • Mounting - sockets, plug board, rail mount, panel mount, through-panel mount, enclosure for mounting on walls or equipment
  • Switching time - where high speed is required
  • "Dry" contacts - when switching very low level signals, special contact materials may be needed such as gold-plated contacts
  • Contact protection - suppress arcing in very inductive circuits
  • Coil protection - suppress the surge voltage produced when switching the coil current
  • Isolation between coil circuit and contacts
  • Aerospace or radiation-resistant testing, special quality assurance
  • Expected mechanical loads due to acceleration - some relays used in aerospace applications are designed to function in shock loads of 50 g or more
  • Accessories such as timers, auxiliary contacts, pilot lamps, test buttons
  • Regulatory approvals
  • Stray magnetic linkage between coils of adjacent relays on a printed circuit board.

Protective relay

A protective relay is a complex electromechanical apparatus, often with more than one coil, designed to calculate operating conditions on an electrical circuit and trip circuit breakers when a fault is found. Unlike switching-type relays with fixed and usually ill-defined operating voltage thresholds and operating times, protective relays have well-established, selectable time/current (or other operating parameter) curves. Such relays were very elaborate, using arrays of induction disks, shaded-pole magnets, operating and restraint coils, solenoid-type operators, telephone-relay style contacts, and phase-shifting networks to allow the relay to respond to such conditions as over-current, over-voltage, reverse power flow, and over- and under-frequency; distance relays would even trip for faults up to a certain distance away from a substation but not beyond that point. An important transmission line or generator unit would have had cubicles dedicated to protection, with a score of individual electromechanical devices. The various protective functions available on a given relay are denoted by standard ANSI Device Numbers. For example, a relay including function 51 would be a timed overcurrent protective relay.

These protective relays provide various types of electrical protection by detecting abnormal conditions and isolating them from the rest of the electrical system by circuit breaker operation. Such relays may be located at the service entrance or at major load centers.

Design and theory of these protective devices is an important part of the education of an electrical engineer who specializes in power systems. Today these devices are nearly entirely replaced (in new designs) with microprocessor-based instruments (numerical relays) that emulate their electromechanical ancestors with great precision and convenience in application. By combining several functions in one case, numerical relays also save capital cost and maintenance cost over electromechanical relays. However, due to their very long life span, tens of thousands of these "silent sentinels" are still protecting transmission lines and electrical apparatus all over the world.

See also Protective Device Coordination.

Top, middle: reed switches, bottom: reed relay

Overcurrent relay

An "Overcurrent Relay" is a type of protective relay which operates when the load current exceeds a preset value. The ANSI Device Designation Number is 50 for an Instantaneous OverCurrent (IOC), 51 for a Time OverCurrent (TOC). In a typical application the overcurrent relay is used for overcurrent protection, connected to a current transformer and calibrated to operate at or above a specific current level. When the relay operates, one or more contacts will operate and energize a trip coil in a Circuit Breaker and trip (open) the Circuit Breaker.

Induction disc overcurrent relay

These robust and reliable electromagnetic relays use the induction principle first discovered by Ferraris in the late 19th century. The magnetic system in an induction disc overcurrent relay is designed to respond to overcurrents in a power system and operate with a predetermined time delay when certain overcurrent limits are reached. To operate, the magnetic system in the relay produces a rotational torque that acts on a metal disc to make contacts, according to the following basic current/torque equation:

$$ T = K \, \phi_1 \, \phi_2 \, \sin\theta \ , $$

where K is a constant, φ1 and φ2 are the two fluxes, and θ is the phase angle between the fluxes.

The relay's primary winding is supplied from the power system's current transformer via a plug bridge, which is commonly known as the plug setting multiplier (PSM). The current setting usually offers seven equally spaced tappings or operating bands that determine the relay's sensitivity. The primary winding is located on the upper electromagnet. The secondary winding has connections on the upper electromagnet that are energised from the primary winding and connected to the lower electromagnet. Once the upper and lower electromagnets are energised they induce eddy currents in the metal disc, flowing through the flux paths. This relationship of eddy currents and fluxes creates a rotational torque proportional to the input current of the primary winding, the two flux paths being out of phase by 90°.

Therefore, in an overcurrent condition, a value of current is reached that overcomes the control spring pressure on the spindle and the braking magnet, causing the metal disc to rotate towards the fixed contact. This initial movement of the disc is also held off to a critical positive value of current by small slots that are often cut into the side of the disc. The time taken for the rotation to make the contacts depends not only on current but also on the spindle backstop position, known as the time multiplier (TM). The time multiplier is divided into 10 linear divisions of the full rotation time.

Provided the relay is free from dirt, the metal disc and the spindle with its contact will reach the fixed contact, sending a signal to trip and isolate the circuit within its designed time and current specifications. The drop-off current of the relay is much lower than its operating value, and once it is reached the relay resets in a reverse motion under the pressure of the control spring, governed by the braking magnet.
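The PSM and TM settings combine into an operating time. Relays of this kind are commonly characterized by the IEC/BS "standard inverse" curve t = 0.14 · TM / (PSM^0.02 − 1); that curve is a standard approximation, not taken from this article, and the Python sketch below (my own) uses assumed settings:

    def toc_time(psm: float, tm: float) -> float:
        """Operating time in seconds from the plug setting multiplier (PSM)
        and time multiplier (TM), standard-inverse characteristic."""
        if psm <= 1.0:
            return float("inf")  # below pickup: the disc never completes its travel
        return tm * 0.14 / (psm ** 0.02 - 1.0)

    print(round(toc_time(10.0, 1.0), 2))  # ~2.97 s at ten times pickup with TM = 1.0
    print(round(toc_time(10.0, 0.1), 2))  # ~0.3 s with the backstop set to TM = 0.1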

Distance relay

The most common form of feeder protection on high-voltage transmission systems is distance relay protection. Power lines have a set impedance per kilometre; using this value and comparing voltage and current, the distance to a fault can be determined. The main types of distance relay protection schemes are:

  • Three step distance protection
  • Switched distance protection
  • Accelerated or permissive intertrip protection
  • Blocked distance protection

In three step distance protection, the relays are separated into three zones of impedance measurement to accommodate over-reach and under-reach conditions. Zone 1 is instantaneous in operation and is purposely set to under-reach at 80% of the total line length, to avoid operating for faults on the next line. This is because measurements of line impedance are not entirely accurate, and voltage transformers, current transformers, and the relays themselves have tolerances; these errors can amount to ±20% of the line impedance, hence the zone's 80% reach. Zone 2 covers the last 20% of the feeder line length and provides backup to the next line by having a slight over-reach; to prevent mal-operation the zone has a 0.5 second time delay. Zone 3 provides backup for the next line and has a time delay of 1 second to grade with the zone 2 protection of the next line.
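A distance element therefore reduces to comparing the apparent impedance against the zone reaches. The Python sketch below is my own simplified illustration; the line impedance and the zone 3 reach are assumed values, while the 80%/over-reach structure and delays follow the text:

    LINE_OHMS = 20.0          # impedance of the protected line (assumed)
    ZONE1 = 0.8 * LINE_OHMS   # instantaneous, deliberate 80% under-reach
    ZONE2 = 1.2 * LINE_OHMS   # slight over-reach, 0.5 s delay
    ZONE3 = 2.2 * LINE_OHMS   # backup for the next line, 1.0 s delay (assumed reach)

    def zone(v: complex, i: complex):
        """Classify a fault by apparent impedance |Z| = |V/I|; return (zone, delay_s)."""
        z = abs(v / i)
        if z <= ZONE1:
            return 1, 0.0
        if z <= ZONE2:
            return 2, 0.5
        if z <= ZONE3:
            return 3, 1.0
        return None, None  # fault beyond reach: do not operate

    print(zone(complex(10e3, 0), complex(1e3, 0)))  # |Z| = 10 ohm -> (1, 0.0)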

Double switching

In railway signalling, relays energise to give a green light, so that if the power fails or a wire breaks, the signal goes to red: this is fail-safe. To protect against false feeds, relay circuits are often cut on both the positive and negative side, so that two false feeds are needed to cause a false green.


References

  1. ^ Mason, C. R., The Art & Science of Protective Relaying, Chapter 2, GE Consumer & Electrical [1].
  • Gurevich, Vladimir (2005). Electrical Relays: Principles and Applications. London - New York: CRC Press.
  • Westinghouse Corporation (1976). Applied Protective Relaying. Library of Congress card no. 76-8060.
  • Terrell Croft and Wilford Summers (eds.) (1987). American Electricians' Handbook, Eleventh Edition. New York: McGraw-Hill. ISBN 0-07-013932-6.
  • Walter A. Elmore. Protective Relaying Theory and Applications. Marcel Dekker, Inc. ISBN 0-8247-9152-5.
  • Vladimir Gurevich (2008). Electronic Devices on Discrete Components for Industrial and Power Engineering. London - New York: CRC Press, 418 pp.
  • Vladimir Gurevich (2003). Protection Devices and Systems for High-Voltage Applications. London - New York: CRC Press, 292 pp.

Retrieved from "http://en.wikipedia.org/wiki/Relay"




-----------------------naver search--------------------------------------------------------------

계전기 (relay)

Summary

In an electric circuit it is often necessary to divide the circuit in two, generate a signal in one part, and use that signal to control the operation of the other part, that is, to open or close it. The electronic component used for this is the relay, which can be regarded as a kind of electric switch.

Description

As the figure shows, a relay consists of an electromagnet, a spring, and terminals; the number of terminals grows or shrinks with the application. One circuit is connected to the electromagnet. While that circuit supplies no current to the electromagnet, the spring holds terminal C against terminal B. When the circuit energizes the electromagnet, the magnetic force pulls terminal C over so that it connects to terminal A instead.

For example, if a circuit that alternately passes and interrupts current over time is connected to the electromagnet, and a bulb and battery are wired to terminals C and A, the bulb will switch on and off with the current in the control circuit. A car's turn indicator uses this principle.

Relays come in many variants that adapt and extend this basic operating principle to the intended use. A latching relay closes when it receives one signal and stays closed even after the signal is removed; a further signal opens it, and it then stays open. A reed relay encloses its contact terminals in a vacuum or an inert gas, to protect the contacts and to allow fast switching.

A polarized relay increases sensitivity by placing the terminal moved by the electromagnet between the two poles of a permanent magnet. For durability there is also the solid-state relay, which uses transistors in place of the electromagnet and moving terminals.